96 Commits

Author SHA1 Message Date
iven
891d972e20 docs: wiki/log audit-fix records
Some checks are pending
CI / Lint & TypeCheck (push) Waiting to run
CI / Unit Tests (push) Waiting to run
CI / Build Frontend (push) Waiting to run
CI / Rust Check (push) Waiting to run
CI / Security Scan (push) Waiting to run
CI / E2E Tests (push) Blocked by required conditions
2026-04-19 13:46:09 +08:00
iven
e12766794b fix(relay,store): audit fixes - reachable auto-recovery + typed errors + reconnect on all paths
C1: mark_key_429 now sets is_active=FALSE, making select_best_key's
auto-recovery path actually reachable. Previously a 429 only set
cooldown_until, leaving the recovery code dead.

H1+H2: retry query gains debug logs (RPM/TPM skips, decryption
failures) + fixed fallthrough error message (RateLimited instead of
NotFound).

H3+H4+M3+M4+M5: agentStore.ts extracts classifyAgentError() for typed
error classification covering 502/503/401/403/429/500; error handling
unified across createClone/createFromTemplate/updateClone/deleteClone
so raw error details no longer leak. All catch blocks now call log.error.

H5+H6: auth.ts extracts a shared triggerReconnect(), called uniformly
from login/loginWithTotp/restoreSession. The state check now fires only
when 'disconnected', avoiding concurrent connect calls while in the
connecting/reconnecting states.

M1: toggle_key_active(true) also clears cooldown_until, so a key the
admin re-activates is no longer hidden by the cooldown filter.
2026-04-19 13:45:49 +08:00
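The typed classification above maps HTTP status codes to a small set of user-facing error kinds. A minimal Rust sketch of the idea, with hypothetical names (the actual helper is classifyAgentError() in TypeScript):

```rust
// Hypothetical sketch of status-code -> typed error classification.
#[derive(Debug, PartialEq)]
enum AgentError {
    Unavailable,  // 502/503: upstream down, suggest retrying later
    Unauthorized, // 401/403: credentials or permission problem
    RateLimited,  // 429: back off before retrying
    Internal,     // 500: server-side fault, hide raw details
    Unknown,
}

fn classify_status(status: u16) -> AgentError {
    match status {
        502 | 503 => AgentError::Unavailable,
        401 | 403 => AgentError::Unauthorized,
        429 => AgentError::RateLimited,
        500 => AgentError::Internal,
        _ => AgentError::Unknown,
    }
}

fn main() {
    println!("{:?}", classify_status(503)); // Unavailable
}
```

The UI then renders one fixed message per kind instead of echoing the raw error.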
iven
d9f8850083 docs: wiki/log record the 5 pre-release audit fixes
2026-04-19 13:28:05 +08:00
iven
0bd50aad8c fix(heartbeat,skills): graceful health-snapshot degradation + skill-load retry
P1-3: health_snapshot no longer errors when the heartbeat engine is
not yet initialized; it returns a pending-status snapshot, avoiding
the HealthPanel race error.

P1-1: loadSkillsCatalog adds a Path C delayed retry (up to 2 attempts,
1.5s/3s intervals), fixing the empty skills array returned before
kernel initialization completes.
2026-04-19 13:27:25 +08:00
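The P1-1 delayed-retry pattern (re-run a loader while the kernel warms up) can be sketched as follows; load_with_retry and the tiny delays are illustrative, not the project's code, and the real 1.5s/3s intervals would be passed as the delay slice:

```rust
use std::thread::sleep;
use std::time::Duration;

// Retry a loader that may legitimately return an empty catalog before
// initialization finishes, sleeping between attempts.
fn load_with_retry<F>(mut load: F, delays: &[Duration]) -> Vec<String>
where
    F: FnMut() -> Vec<String>,
{
    let mut result = load();
    for delay in delays {
        if !result.is_empty() {
            break; // catalog populated, stop retrying
        }
        sleep(*delay);
        result = load();
    }
    result
}

fn main() {
    // Simulate a kernel whose catalog is empty only on the first call.
    let mut calls = 0;
    let skills = load_with_retry(
        || {
            calls += 1;
            if calls < 2 { vec![] } else { vec!["quiz".to_string()] }
        },
        &[Duration::from_millis(1), Duration::from_millis(2)],
    );
    println!("{:?}", skills);
}
```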
iven
4ee587d070 fix(relay,store): Provider Key auto-recovery + friendly Agent-creation errors + reconnect after login
P0-1: key_pool.rs adds auto-recovery for keys whose cooldown has
expired. When every key has is_active=false and cooldown_until is in
the past, keys are re-activated and selection retried, so relay/models
no longer returns an empty array and breaks chat.

P0-2: agentStore.ts createClone/createFromTemplate error messages
changed from raw HTTP errors to actionable Chinese prompts (classified
handling for 502/503/401).

P1-2: auth.ts triggers connectionStore.connect() after a successful
login, ensuring the kernel uses the new JWT rather than a stale token.
2026-04-19 13:16:12 +08:00
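The P0-1 recovery rule can be sketched like this, with illustrative Key fields (the actual pool is database-backed):

```rust
// If every key is inactive but some cooldowns have expired, reactivate
// those keys and retry selection instead of returning an empty pool.
use std::time::Instant;

struct Key {
    is_active: bool,
    cooldown_until: Option<Instant>, // None means no cooldown pending
}

fn select_best_key(keys: &mut [Key]) -> Option<usize> {
    if keys.iter().all(|k| !k.is_active) {
        let now = Instant::now();
        for k in keys.iter_mut() {
            if k.cooldown_until.map_or(true, |t| t <= now) {
                k.is_active = true; // cooldown expired: recover the key
                k.cooldown_until = None;
            }
        }
    }
    keys.iter().position(|k| k.is_active)
}

fn main() {
    let mut keys = vec![Key { is_active: false, cooldown_until: None }];
    println!("{:?}", select_best_key(&mut keys)); // Some(0)
}
```

Combined with the later C1 fix (429 sets is_active=FALSE), this is what makes the recovery branch reachable at all.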
iven
8b1b08be82 docs: sync TRUTH.md + wiki/log for Batch 3/8 completion
TRUTH.md: update date, add workspace test count 797
wiki/log.md: append 2026-04-19 entry for sqlx upgrade + test coverage
2026-04-19 11:26:24 +08:00
iven
beeb529d8f test(protocols,skills): add 90 tests for MCP types + skill loader/runner
zclaw-protocols: +43 tests covering mcp_types serde, ContentBlock
variants, transport config builders, and domain type roundtrips.

zclaw-skills: +47 tests covering SKILL.md/TOML parsing, auto-classify,
PromptOnlySkill execution, and SkillManifest/SkillResult roundtrips.

Batch 8 of audit plan (plans/stateless-petting-rossum.md).
2026-04-19 11:24:57 +08:00
iven
226beb708b Merge branch 'chore/sqlx-0.8-upgrade'
Unifies sqlx on 0.7→0.8, resolving the dual-version pull-in from pgvector.
2026-04-19 11:15:17 +08:00
iven
dc7a1d5400 chore(deps): upgrade sqlx 0.7→0.8 + libsqlite3-sys 0.27→0.30
Unifies dual sqlx versions caused by pgvector 0.4 pulling sqlx 0.8.x
as indirect dependency. Zero source code changes required, 719/719
tests pass.

Batch 3 of audit plan (plans/stateless-petting-rossum.md).
2026-04-19 11:15:05 +08:00
iven
d9b0b4f4f7 fix(audit): Batch 7-9 dead_code annotations + TODO cleanup + docs sync
Batch 7: unify dead_code annotations (16 sites)
- crates/: 9 sites in growth, kernel, pipeline, runtime, saas, skills
- src-tauri/: 7 sites in classroom, intelligence, browser, mcp
- unified format: #[allow(dead_code)] // @reserved: <reason>

Batch 7+: 10 unused pub functions in EvolutionEngine L2/L3
- all annotated @reserved: EvolutionEngine L2/L3, post-release integration

Batch 9: TODO → FUTURE markers (4 sites)
- html.rs: template-based export
- nl_schedule.rs: LLM-assisted parsing
- knowledge/handlers.rs: category_id from upload
- personality_detector.rs: VikingStorage persistence

Batch 5+: Cargo.lock updated (serde_yaml_bw migration)

Full test suite passes: 719 passed, 0 failed
2026-04-19 08:54:57 +08:00
iven
edd6dd5fc8 fix(audit): Batch 4-6 middleware comments + dependency migration + security hardening
Batch 4:
- kernel/mod.rs: add a comment noting middleware registration order ≠ execution order
- annotate the EvolutionMiddleware registration site with priority=78

Batch 5:
- desktop/src-tauri/Cargo.toml: serde_yaml 0.9 (deprecated) → serde_yaml_bw 2.x

Batch 6:
- saas/main.rs: CORS dev mode switched to explicit localhost origins (fixes the Any+credentials violation)
- docker-compose.yml: remove the default weak password your_secure_password; the value is now required and validated
- director.rs: wrap user input with <user_input>/<user_request> boundary markers to resist injection

Full test suite passes: 719 passed, 0 failed
2026-04-19 08:46:12 +08:00
iven
4329bae1ea fix(audit): Batch 2 production-code unwrap replacement (20 sites)
P0 fixes:
- viking_commands.rs: URI path-building unwrap → ok_or_else error propagation
- clip.rs: temp-file path unwrap → ok_or_else (prevents panics on Windows paths with Chinese characters)

P1 fixes:
- personality_detector.rs: Mutex lock unwrap → unwrap_or_else to contain lock poisoning
- pptx.rs: HashMap.get unwrap → expect (key comes from keys() iteration)

P2 fixes:
- 4× SystemTime.unwrap → expect("system clock is valid")
- 4× dev_server URL.parse.unwrap → expect("hardcoded URL is valid")
- 9× nl_schedule Regex.unwrap → expect("static regex is valid")
- 5× data_masking Regex.unwrap → expect("static regex is valid")
- 2× pipeline/state Regex.unwrap → expect("static regex is valid")

Full test suite passes: 719 passed, 0 failed
2026-04-19 08:38:09 +08:00
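The replacements in this batch split into two patterns: recoverable Options propagate a typed error via ok_or_else, while true invariants use expect with a message naming the invariant. A dependency-free sketch (function names are illustrative):

```rust
use std::path::Path;
use std::time::{SystemTime, UNIX_EPOCH};

// Recoverable case: a missing parent is a caller error, not a panic.
fn parent_dir(path: &str) -> Result<String, String> {
    Path::new(path)
        .parent()
        .map(|p| p.display().to_string())
        .ok_or_else(|| format!("path has no parent: {path}"))
}

// Invariant case (P2 style): a clock before UNIX_EPOCH is a
// configuration bug, so expect documents the assumption.
fn now_secs() -> u64 {
    SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("system clock is valid")
        .as_secs()
}

fn main() {
    println!("{:?}", parent_dir("/tmp/app/data.db"));
    println!("{}", now_secs());
}
```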
iven
924ad5a6ec fix(audit): Batch 0-1 docs calibration + silenced `let _ =` error fixes
Batch 0:
- TRUTH.md middleware layers 14→15 (add EvolutionMiddleware@78)
- wiki/middleware.md synced to 15 layers + updated priority classification
- Store count confirmed at 25

Batch 1:
- approvals.rs: 3× map_err + let _ = simplified to if let Err
- director.rs: log at debug when the oneshot send fails
- task.rs: 4× subtask status updates now log at debug
- chat.rs: stream-message send and event emit now log at warn/debug
- heartbeat.rs: alert broadcast logs at debug + break optimization

Full test suite passes: 719 passed, 0 failed
2026-04-19 08:30:33 +08:00
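The Batch 1 change can be sketched with a plain channel: a send whose failure used to be discarded with `let _ =` now logs instead (eprintln! stands in for tracing::warn! to keep the sketch dependency-free):

```rust
use std::sync::mpsc;

// Report send failures instead of silently dropping them.
fn notify(tx: &mpsc::Sender<String>, event: &str) -> bool {
    if let Err(e) = tx.send(event.to_string()) {
        eprintln!("event dropped, receiver gone: {e}");
        return false;
    }
    true
}

fn main() {
    let (tx, rx) = mpsc::channel();
    notify(&tx, "hello");
    println!("{:?}", rx.recv());
}
```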
iven
e94235c4f9 fix(growth): exhaustive Evolution Engine audit - all 3 CRITICAL + 3 HIGH fixed
C-01: wire ExperienceExtractor to ExperienceStore
- GrowthIntegration.new() injects ExperienceStore when creating ExperienceExtractor
- experience persistence path connected: extract_combined → persist_experiences → ExperienceStore

C-02+C-03: evolution trigger chain connected end to end
- create_middleware_chain() registers EvolutionMiddleware (priority 78)
- MemoryMiddleware holds a shared Arc<EvolutionMiddleware>
- after_completion calls check_evolution() → pushes PendingEvolution
- EvolutionMiddleware injects evolution suggestions into the system prompt before the next conversation

H-01: fix the FeedbackCollector loaded flag
- keep loaded=false when load() fails so the next save retries
- log level debug → warn

H-03: FeedbackCollector interior mutability
- EvolutionEngine.feedback changed to Arc<Mutex<FeedbackCollector>>
- submit_feedback() changed from &mut self to &self, supporting the middleware's &self call path
- GrowthIntegration.initialize() changed from &mut self to &self

H-05: delete the empty test test_parse_empty_response (no assert)

H-06: infer_experiences_from_memories() fallback
- Outcome::Success → Outcome::Partial (reflects inference uncertainty)
2026-04-19 00:43:02 +08:00
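The H-03 change follows the standard Arc<Mutex<..>> interior-mutability pattern: wrapping the collector in a lock lets submit_feedback take &self, matching the middleware's &self call path. A minimal sketch with illustrative bodies (the real collector is async and persists via VikingAdapter):

```rust
use std::sync::{Arc, Mutex};

struct FeedbackCollector {
    scores: Vec<i32>,
}

struct EvolutionEngine {
    feedback: Arc<Mutex<FeedbackCollector>>,
}

impl EvolutionEngine {
    // &self instead of &mut self: mutation goes through the lock.
    fn submit_feedback(&self, score: i32) {
        self.feedback.lock().expect("lock not poisoned").scores.push(score);
    }

    fn count(&self) -> usize {
        self.feedback.lock().expect("lock not poisoned").scores.len()
    }
}

fn main() {
    let engine = EvolutionEngine {
        feedback: Arc::new(Mutex::new(FeedbackCollector { scores: Vec::new() })),
    };
    engine.submit_feedback(1);
    println!("{}", engine.count()); // 1
}
```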
iven
72b3206a6b fix(growth): AUD-11 feedback-trust startup integration + AUD-12 log-format cleanup
AUD-11: FeedbackCollector internal lazy-load mechanism
- the first save() call automatically loads trust records from persistent storage
- load() uses an or_insert strategy so newer in-memory records are not overwritten
- GrowthIntegration.initialize() kept as an optional optimization entry point
- removed ensure_feedback_loaded, which could not be called from &self middleware

AUD-12: log-format cleanup
- ProfileSignals gains a signal_count() method
- extractor.rs uses signal_count() instead of has_any_signal() as usize
2026-04-19 00:15:50 +08:00
iven
0fd78ac321 fix: comprehensive audit fixes - P0 functional defects + P1 code quality
P0 functional fixes:
- stats: correct the Admin V2 dashboard API path (/stats/dashboard → /admin/dashboard)
- mcp: guard the desktop MCP plugin with isTauriRuntime() to avoid crashes in browser mode
- admin: fix sidebar highlight logic (startsWith → exact match + sub-path)

P1 code quality:
- delete dead code in workflowBuilderStore.ts (456 lines, zero references)
- sqlite.rs: 3 silent SQL failures now log via tracing::warn!
- mcp_tool_adapter: 2 unwraps replaced with safe fallbacks
- orchestration_execute annotated @reserved
- TRUTH.md test counts calibrated (734→803), Store count 26→25
2026-04-18 23:57:03 +08:00
iven
ab4d06c4d6 fix(growth): audit fixes - CRITICAL compile error + LOW silent data loss
- CRITICAL: extraction_adapter.rs extract_with_prompt() used the nonexistent
  zclaw_types::Error::msg(); changed to ZclawError::InvalidInput/ZclawError::LlmError
- LOW: feedback_collector.rs save() replaced serde_json::to_string().unwrap_or_default()
  with explicit error handling + warn log + continue, avoiding silently storing empty data
2026-04-18 23:30:58 +08:00
iven
1595290db2 fix(growth): MEDIUM-12 ProfileUpdater updates all 5 profile dimensions
Root cause: ProfileUpdater handled only industry and communication_style
(2/5 dimensions), skipping recent_topic, pain_point, and preferred_tool.

Fix:
- ProfileFieldUpdate gains a kind field (SetField | AppendArray)
- collect_updates() now handles all 5 dimensions:
  - industry, communication_style → SetField (overwrite)
  - recent_topic, pain_point, preferred_tool → AppendArray (append + dedupe)
- growth.rs dispatches to different UserProfileStore methods based on ProfileUpdateKind:
  - SetField → update_field()
  - AppendArray → add_recent_topic() / add_pain_point() / add_preferred_tool()
- ProfileUpdateKind re-exported from lib.rs

Tests: test_collect_updates_all_five_dimensions covers the 5 dimensions and both update kinds
2026-04-18 23:07:31 +08:00
iven
2c0602e0e6 fix(growth): HIGH-3 FeedbackCollector trust persistence
Root cause: FeedbackCollector kept trust records in a pure in-memory
HashMap, which reset to zero on restart.

Fix:
- FeedbackCollector gains a viking: Option<Arc<VikingAdapter>> field
- add a with_viking() constructor
- add save(): iterate trust_records → MemoryEntry → store via VikingAdapter
- add load(): find_by_prefix, deserialize back into the HashMap
- EvolutionEngine::new()/from_experience_store() pass in the VikingAdapter
- submit_feedback() is now async and calls save() after each submission
- add load_feedback() for restoring records on startup

Tests: save_and_load_roundtrip + load_without_viking + save_without_viking
2026-04-18 23:03:31 +08:00
iven
f358f14f12 fix(growth): exhaustive audit fixes - broken tracker timeline link + docs update
P1-1: tracker.rs record_learning now stores via MemoryEntry
      (it previously wrote with store_metadata while get_timeline read
       with find_by_prefix; the two paths never intersected, so the
       timeline was always empty)

P2-4: extractor.rs removes the unused _llm_driver binding in favor of an is_none() check

P2-5: lib.rs module docs updated to reflect the actual 17 modules rather than the original 4

profile_updater.rs: comment added noting that only fields supported by update_field are collected

Tests: zclaw-growth 137 tests, zclaw-runtime 87 tests, 0 failures
2026-04-18 23:01:04 +08:00
iven
7cdcfaddb0 fix(growth): MEDIUM-10 add tool_used field to Experience
Root cause: the Experience struct had no tool_used field, so
PatternAggregator extracted tool names from the context field (a
semantic confusion), producing inaccurate tool information.

Fix:
- experience_store.rs: Experience gains tool_used: Option<String>
  (#[serde(default)] for old-data compatibility); Experience::new() initializes it to None
- experience_extractor.rs: persist_experiences() fills tool_used from the
  candidate's tools_used[0] and also fills industry_context
- pattern_aggregator.rs: extract tool names from tool_used instead of misusing context
- store_experience() adds tool_used to keywords to improve search hit rate
2026-04-18 22:58:47 +08:00
iven
3c6581f915 fix(growth): HIGH-6 fix hollow extract_combined merged extraction
Root cause: growth.rs built CombinedExtraction with hardcoded
experiences: Vec::new() and profile_signals: default(), so L1
structured experiences were never extracted, L2 skill evolution had no
input data, and the evolution engine could not work end to end.

Fix:
- extractor.rs: add COMBINED_EXTRACTION_PROMPT, a unified prompt that yields
  memories + experiences + profile_signals from a single LLM call
- extractor.rs: add parse_combined_response() to parse the LLM JSON response
- LlmDriverForExtraction trait: add extract_with_prompt() (unsupported by
  default; falls back to the existing extract() + heuristic inference)
- MemoryExtractor: add extract_combined(), preferring the single call and falling back on failure
- growth.rs: extract_combined() uses the new merged extraction instead of hardcoded empty values
- TauriExtractionDriver: implement extract_with_prompt()
- ProfileSignals: add has_any_signal()
- types.rs: no structural change to ProfileSignals (fields already existed)

Tests: 4 new tests (parse_combined_response_full/minimal/invalid +
extract_combined_fallback); all 11 extractor tests pass
2026-04-18 22:56:42 +08:00
iven
cb727fdcc7 fix(growth): second-pass audit - all 6 CRITICAL/HIGH/MEDIUM items fixed
CRITICAL-1/2: json_utils brace matching rewritten as a bracket-balancing algorithm
  - handles braces inside string literals and escaped quotes
  - 5 new tests (balanced matching, braces in strings, escaped quotes, extract_string_array)

HIGH-4: EvolutionMiddleware takes only the first event (remove(0)) without discarding the rest
HIGH-5: EvolutionMiddleware checks emptiness with read() before taking write(), reducing lock contention
HIGH-7: from_experience_store uses the passed store's viking instance (no longer ignores the argument)
  - ExperienceStore gains a viking() getter

MEDIUM-9: deduplicate JSON array parsing in skill_generator + workflow_composer
  - new shared json_utils::extract_string_array()
MEDIUM-14: EvolutionMiddleware injection text stripped of excess indentation

Tests: zclaw-growth 133 tests, zclaw-runtime 87 tests, 0 workspace failures
2026-04-18 22:30:10 +08:00
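The CRITICAL-1/2 bracket-balancing approach can be sketched as a single pass that tracks depth, string state, and escapes; this is a minimal version, not the project's json_utils code:

```rust
// Find the first balanced {...} block, ignoring braces inside string
// literals and escaped quotes. A naive first-'{'/last-'}' match fails
// on exactly those cases.
fn extract_json_block(text: &str) -> Option<&str> {
    let start = text.find('{')?;
    let mut depth = 0usize;
    let mut in_string = false;
    let mut escaped = false;
    for (i, c) in text[start..].char_indices() {
        if escaped {
            escaped = false; // the character after a backslash is literal
            continue;
        }
        match c {
            '\\' if in_string => escaped = true,
            '"' => in_string = !in_string,
            '{' if !in_string => depth += 1,
            '}' if !in_string => {
                depth -= 1;
                if depth == 0 {
                    return Some(&text[start..start + i + c.len_utf8()]);
                }
            }
            _ => {}
        }
    }
    None // unbalanced input
}

fn main() {
    println!("{:?}", extract_json_block(r#"prefix {"a": "b}"} suffix"#));
}
```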
iven
a9ea9d8691 fix(growth): Evolution Engine audit fixes - all 7 items complete
HIGH-1: extract shared json_utils.rs; deduplicate skill_generator/workflow_composer
HIGH-2: FeedbackCollector Vec→HashMap, eliminating unwrap() panic risk
HIGH-3: ProfileUpdater changed to collect_updates() returning a field list;
        growth.rs calls update_field() directly (async) instead of a no-op closure
MEDIUM-1: EvolutionMiddleware drains automatically after injection to prevent repeated injection
MEDIUM-2: PatternAggregator tools extraction collects context values directly
MEDIUM-3: evolution_engine.rs removes 4 unused imports
MEDIUM-4: workflow_composer parse_response pattern argument underscored
MEDIUM-7: SkillCandidate gains a version field (default=1)

Tests: zclaw-growth 128 tests, zclaw-runtime 86 tests, 0 workspace failures
2026-04-18 22:15:43 +08:00
iven
f97e6fdbb6 feat: Evolution Engine Phase 3-5 - WorkflowComposer + FeedbackCollector + EvolutionMiddleware + feedback loop
Phase 3:
- EvolutionMiddleware (priority 78): injects evolution-confirmation prompts into butler conversations
- GrowthIntegration.check_evolution() API wired in

Phase 4:
- WorkflowComposer: trajectory tool-chain pattern clustering + Pipeline YAML prompt building + JSON parsing
- EvolutionEngine.analyze_trajectory_patterns() L3 entry point

Phase 5:
- FeedbackCollector: feedback-signal collection + trust management + recommendations (Optimize/Archive/Promote)
- EvolutionEngine feedback-loop methods: submit_feedback/get_artifacts_needing_optimization

12 new tests (111→123); all 701 workspace tests pass.
2026-04-18 21:27:59 +08:00
iven
7d03e6a90c feat(runtime): wire GrowthIntegration into EvolutionEngine - L2 trigger-check API 2026-04-18 21:17:48 +08:00
iven
415abf9e66 feat(growth): L2 skill-evolution core - PatternAggregator + SkillGenerator + QualityGate + EvolutionEngine
- PatternAggregator: aggregates experience patterns, finding solidifiable patterns with reuse_count >= threshold
- SkillGenerator: LLM prompt building + JSON parsing + automatic JSON-block extraction
- QualityGate: confidence/conflict/format quality gating
- EvolutionEngine: central scheduler coordinating L2 trigger checks + skill generation + quality validation

24 new tests (87→111); 0 errors across the workspace.
2026-04-18 21:09:48 +08:00
iven
8d218e9ab9 feat(runtime): wire GrowthIntegration into ExperienceExtractor + ProfileUpdater
2026-04-18 21:01:04 +08:00
iven
e2d44ecf52 feat(growth): ExperienceExtractor + ProfileUpdater - structured experience extraction and incremental profile updates 2026-04-18 20:51:17 +08:00
iven
8ec6ca5990 feat(growth): extend LlmDriverForExtraction - add extract_combined_all default implementation 2026-04-18 20:48:09 +08:00
iven
7e8eb64c4a feat(growth): add Evolution Engine core types - ExperienceCandidate/CombinedExtraction/EvolutionEvent 2026-04-18 20:47:30 +08:00
iven
e88c51fd85 docs(wiki): pre-release audit number calibration - TRUTH/CLAUDE/wiki synced
TRUTH.md:
- #[test] 433→425, #[tokio::test] 368→309 (verified 2026-04-18)
- Zustand Stores 21→26, Admin V2 pages 15→17
- Pipeline YAML 17→18
- Hands enabled 9→7 (6 HAND.toml + _reminder); Whiteboard/Slideshow/Speech marked as in development

CLAUDE.md §6:
- Hands: 12 capability packs (7 registered + 3 in development + 2 disabled)
- §13 architecture snapshot synced

wiki/index.md:
- key numbers synced
2026-04-18 14:09:47 +08:00
iven
e10549a1b9 fix: pre-release audit Batch 2 - Debug masking + unwrap + silent error swallowing + MCP lock + indexes + Config validation
Security:
- LlmConfig custom Debug impl; api_key renders as "***REDACTED***"
- tsconfig.json: remove the ErrorBoundary.tsx exclusion (security-critical component)
- billing/handlers.rs: Response builder unwrap → map_err error propagation
- classroom_commands/mod.rs: db_path.parent().unwrap() → ok_or_else

Silent error swallowing:
- approvals.rs: 3× warn→error (losing approval state is a serious event)
- events.rs: publish() logs a debug "Event dropped" message
- mcp_transport.rs: eprintln→tracing::warn (zombie-process risk)
- zclaw-growth sqlite.rs: 4 migrations distinguish "duplicate column name" from real errors

MCP Transport:
- merge stdin+stdout into a single Mutex<TransportHandles>
- make send_request's write-then-read atomic, preventing mismatched concurrent responses

Database:
- new migration 20260418000001: idx_rle_created_at + idx_billing_sub_plan + idx_ki_created_by

Config validation:
- SaaSConfig::load() validates jwt_expiration_hours >= 1, max_connections > 0, min <= max
2026-04-18 14:09:36 +08:00
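The redacting Debug impl is the standard pattern for keeping secrets out of logs: derive(Debug) would print the key, so a manual impl substitutes a marker. A sketch with illustrative fields:

```rust
use std::fmt;

struct LlmConfig {
    endpoint: String,
    api_key: String,
}

// Manual Debug: every field except the secret is printed normally.
impl fmt::Debug for LlmConfig {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.debug_struct("LlmConfig")
            .field("endpoint", &self.endpoint)
            .field("api_key", &"***REDACTED***") // never print self.api_key
            .finish()
    }
}

fn main() {
    let cfg = LlmConfig {
        endpoint: "https://api.example".into(),
        api_key: "sk-secret".into(),
    };
    println!("{:?}", cfg);
}
```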
iven
f3fb5340b5 fix: pre-release audit Batch 1 - Pipeline memory leak/timeout + Director deadlock + Rate Limit Worker
Pipeline executor:
- add cleanup() with a MAX_COMPLETED_RUNS=100 cap that evicts old run records
- wrap each step in tokio::time::timeout (using PipelineSpec.timeout_secs, default 300s)
- cap Delay ms at 60000; warn and truncate beyond that

Director send_to_agent:
- refactored to a oneshot::channel response pattern, avoiding lock contention between inbox and pending_requests
- add ensure_inbox_reader(), an independent task that dispatches responses to the matching oneshot sender

cleanup_rate_limit Worker:
- implement the worker body: DELETE FROM rate_limit_events WHERE created_at < NOW() - INTERVAL '1 hour'

651 tests passed, 0 failed
2026-04-18 14:09:16 +08:00
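The oneshot response pattern gives each request its own reply channel, so the caller blocks only on its own response and no shared pending-requests map (with its lock) is needed. A synchronous sketch in which std::sync::mpsc's sync_channel stands in for tokio::sync::oneshot; names are illustrative:

```rust
use std::sync::mpsc;
use std::thread;

// Each request carries its own single-use reply channel.
struct Request {
    payload: String,
    reply: mpsc::SyncSender<String>,
}

fn send_to_agent(inbox: &mpsc::Sender<Request>, payload: &str) -> String {
    let (reply_tx, reply_rx) = mpsc::sync_channel(1); // per-request "oneshot"
    inbox
        .send(Request { payload: payload.to_string(), reply: reply_tx })
        .expect("agent inbox open");
    reply_rx.recv().expect("agent replied")
}

fn main() {
    let (inbox_tx, inbox_rx) = mpsc::channel::<Request>();
    // The "agent": echoes each request back on its own reply channel.
    let agent = thread::spawn(move || {
        for req in inbox_rx {
            let _ = req.reply.send(format!("ack:{}", req.payload));
        }
    });
    let answer = send_to_agent(&inbox_tx, "ping");
    println!("{answer}"); // ack:ping
    drop(inbox_tx); // close the inbox so the agent loop ends
    agent.join().unwrap();
}
```

Because the reply channel is owned by exactly one caller, responses cannot be mismatched between concurrent requests.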
iven
35a11504d7 docs(wiki): post-audit follow-up fix records
2026-04-18 09:24:25 +08:00
iven
450569dc88 fix: 3 post-audit follow-up fixes - leftover cleanup + FTS5 CJK + HTTP size limit
1. Shell Hands leftover cleanup (3 sites):
   - message.rs: remove the stale zclaw_hands::slideshow comment
   - user_profiler.rs: slideshow preference changed to RecentTopic
   - handStore.test.ts: remove speech mock data (3→2)

2. zclaw-growth FTS5 CJK query fix:
   - sanitize_fts_query's CJK path changed from an exact phrase to an OR combination of tokens
   - "Rust 编程" → "rust" OR "编程" (previously the exact phrase "rust 编程")
   - fixes test_memory_lifecycle + test_semantic_search_ranking

3. WASM HTTP response size limit:
   - Content-Length precheck + post-read truncation (1MB cap)
   - read_to_string replaced with explicit error handling

All 651 tests pass, 0 failures.
2026-04-18 09:23:58 +08:00
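The token-OR rewrite for CJK queries can be sketched as below; the CJK detection range and exact quoting rules of the real sanitize_fts_query may differ:

```rust
// CJK Unified Ideographs, basic block only (an assumption; the real
// check may cover more ranges).
fn is_cjk(c: char) -> bool {
    ('\u{4E00}'..='\u{9FFF}').contains(&c)
}

// CJK-containing input is split into whitespace tokens, each quoted,
// and joined with OR, so FTS5 can match tokens its default tokenizer
// indexed separately instead of requiring the exact phrase.
fn sanitize_fts_query(input: &str) -> String {
    let tokens: Vec<String> = input
        .split_whitespace()
        .map(|t| format!("\"{}\"", t.to_lowercase()))
        .collect();
    if input.chars().any(is_cjk) {
        tokens.join(" OR ")
    } else {
        tokens.join(" ")
    }
}

fn main() {
    println!("{}", sanitize_fts_query("Rust 编程"));
}
```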
iven
3a24455401 docs(wiki): deep-audit fix records
2026-04-18 09:11:37 +08:00
iven
4e4eefdde1 fix: deep-audit fixes - WASM security hardening + A2A compile path + test compilation
CRITICAL:
- zclaw_file_read: path-traversal fix - component-level filtering replaces the prefix check
- zclaw_http_fetch: SSRF protection - URL scheme allowlist + private-IP range blocking
- Phase 4A: remove the zclaw-protocols a2a feature gate; A2A always compiles
- remove the kernel/desktop multi-agent feature (it no longer gated any code)

MEDIUM:
- user_profiler: fix the FactCategory cfg(test) import (all 563 tests pass)
2026-04-18 09:11:15 +08:00
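Component-level filtering, the zclaw_file_read fix, walks path components instead of comparing string prefixes (a prefix check is defeated by inputs like "/workspace/../etc"). A sketch under the assumption that rejecting every non-Normal component is sufficient; symlink handling is out of scope here:

```rust
use std::path::{Component, Path, PathBuf};

// Join a requested path onto the sandbox root, accepting only plain
// name components; ParentDir, RootDir, and Prefix are rejected outright.
fn resolve_sandboxed(root: &Path, requested: &str) -> Option<PathBuf> {
    let mut out = root.to_path_buf();
    for comp in Path::new(requested).components() {
        match comp {
            Component::Normal(part) => out.push(part),
            Component::CurDir => {} // "./" is harmless, skip it
            _ => return None,       // "..", "/", drive prefixes: reject
        }
    }
    Some(out)
}

fn main() {
    let root = Path::new("/workspace");
    println!("{:?}", resolve_sandboxed(root, "../etc/passwd")); // None
    println!("{:?}", resolve_sandboxed(root, "a/b.txt"));
}
```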
iven
0522f2bf95 docs: CLAUDE.md architecture-snapshot update - memory pipeline E2E + recent-changes ordering
2026-04-18 08:18:39 +08:00
iven
04f70c797d docs(wiki): Phase 4A/4B records - multi-agent gate removal + WASM host functions
2026-04-18 08:18:28 +08:00
iven
a685e97b17 feat(skills): real WASM host-function implementations - zclaw_log/http_fetch/file_read (Phase 4B)
Replace stubs with real implementations:
- zclaw_log: read guest memory and log it
- zclaw_http_fetch: synchronous GET via ureq v3 (10s timeout, network_allowed guard)
- zclaw_file_read: sandboxed reads under /workspace (path validation prevents escape)
Adds the ureq v3 workspace dependency; all 25 tests pass.
2026-04-18 08:18:08 +08:00
iven
2037809196 refactor(kernel): remove the multi-agent feature gate - all 33 cfg sites deleted (Phase 4A)
Removes #[cfg(feature = "multi-agent")] from 8 files; zclaw-kernel default
features now include multi-agent. A2A routing, agents, and adapters always compile.
2026-04-18 08:17:58 +08:00
iven
eaa99a20db feat(ui): Feature Gates settings page - experimental feature toggles (Phase 3B)
New Settings > Experimental Features page:
- 3 toggles: multi-Agent collaboration / WASM skill sandbox / verbose tool output
- localStorage persistence + public isFeatureEnabled() API
- experimental-feature warning banner
- currently a frontend runtime toggle; may later connect to Kernel config
2026-04-18 08:05:06 +08:00
iven
a38e91935f docs(wiki): Phase 3A loop_runner dual-path merge record
2026-04-17 21:56:34 +08:00
iven
5687dc20e0 refactor(runtime): merge loop_runner dual paths - unify on the middleware chain (Phase 3A)
middleware_chain changed from Option<MiddlewareChain> to MiddlewareChain:
- remove 6 use_middleware branches + 2 legacy loop_guard inline paths
- remove the loop_guard field, the Mutex import, and the circuit_breaker_triggered variable
- an empty chain (Default) behaves identically to the middleware path's no-op
- 1154 lines → 1023 lines, a net reduction of 131 lines
- cargo check --workspace ✓ | cargo test ✓ (excluding desktop's pre-existing compile issue)
2026-04-17 21:56:10 +08:00
iven
21c3222ad5 docs(wiki): Phase 2A Pipeline decoupling record
2026-04-17 20:10:34 +08:00
iven
5381e316f0 refactor(pipeline): remove the empty zclaw-kernel dependency (Phase 2A)
Pipeline code contains no zclaw_kernel references; the dependency declaration was a leftover.
Compilation verified after removal: cargo check --workspace --exclude zclaw-saas ✓
2026-04-17 20:10:21 +08:00
iven
96294d5b87 docs(wiki): Phase 2B saasStore split record
2026-04-17 20:05:57 +08:00
iven
e3b6003be2 refactor(store): split saasStore into submodules (Phase 2B)
1025-line single file → 5 files + barrel re-export:
- saas/types.ts (103 lines): type definitions
- saas/shared.ts (93 lines): Device ID, constants, recovery probe
- saas/auth.ts (362 lines): login/register/logout/recovery/TOTP
- saas/billing.ts (84 lines): plans/subscriptions/payments
- saas/index.ts (309 lines): Store assembly + connection/templates/config
- saasStore.ts (15 lines): re-export barrel (zero external changes)

All 25+ consumer import paths unchanged, `tsc --noEmit` ✓
2026-04-17 20:05:43 +08:00
iven
f9f5472d99 docs(wiki): Phase 5 empty-shell Hand removal record
2026-04-17 19:56:32 +08:00
iven
cb9e48f11d refactor(hands): remove empty-shell Hands - Whiteboard/Slideshow/Speech (Phase 5)
Delete 3 Hands that contained only UI placeholders, cleaning up the Rust
implementations and frontend references:
- Rust: whiteboard.rs (422 lines) + slideshow.rs (797 lines) + speech.rs (442 lines)
- Frontend: WhiteboardCanvas + SlideshowRenderer + speech-synth + related types/constants
- Config: 3 HAND.toml files
- net reduction of ~5400 lines; Hands 9→6 (enabled) + Quiz/Browser/Researcher/Collector/Clip/Twitter/Reminder
2026-04-17 19:55:59 +08:00
iven
14fa7e150a docs(wiki): Phase 1 error-system refactor record
2026-04-17 19:38:47 +08:00
iven
f9290ea683 feat(types): error-system refactor - ErrorKind + error codes + Serialize
Rust (crates/zclaw-types/src/error.rs):
- new ErrorKind enum (17 kinds) + Serde Serialize/Deserialize
- new error_codes module (stable error codes E4040-E5110)
- ZclawError gains kind() / code() methods
- new ErrorDetail struct + Serialize impl
- all existing variants and constructors retained (zero breakage)
- 12 new tests: kind mapping + code stability + JSON serialization

TypeScript (desktop/src/lib/error-types.ts):
- new RustErrorKind / RustErrorDetail type definitions
- new tryParseRustError() structured error parsing
- new classifyRustError() classifying by ErrorKind
- classifyError() prefers structured errors, falls back to string matching
- mapping from the 17 ErrorKinds to Chinese titles

Verification: cargo check ✓ | tsc ✓ | 62 zclaw-types tests ✓
2026-04-17 19:38:19 +08:00
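The stable-code scheme maps each ErrorKind to a fixed code string that clients can match on across releases. A sketch with three illustrative kinds; E4040 appears in the commit's range, but the individual assignments here are hypothetical:

```rust
// Stable codes decouple client-side matching from Rust enum layout:
// variants may be renamed or reordered, codes never change.
#[derive(Debug, Clone, Copy, PartialEq)]
enum ErrorKind {
    NotFound,
    RateLimited,
    Internal,
}

fn code(kind: ErrorKind) -> &'static str {
    match kind {
        ErrorKind::NotFound => "E4040",
        ErrorKind::RateLimited => "E4290",
        ErrorKind::Internal => "E5000",
    }
}

fn main() {
    println!("{}", code(ErrorKind::RateLimited));
}
```

On the TypeScript side, tryParseRustError() can then branch on the code or kind instead of string-matching the message.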
iven
0754ea19c2 docs(wiki): Phase 0 fix records - streaming events/CI/Chinese localization
2026-04-17 18:13:43 +08:00
iven
2cae822775 fix: Phase 0 blocker fixes - streaming-event error handling + CI exclusion + UI Chinese localization
BLK-2: replaced all 22 occurrences of let _ = tx.send() in loop_runner.rs with
if let Err(e) { tracing::warn!(...) }, fixing silent loss of streaming events

BLK-5: translated 50+ English strings to Chinese
- HandApprovalModal.tsx (~40 spots): risk labels / buttons / status / form labels
- ChatArea.tsx: Thinking... / Sending...
- AuditLogsPanel.tsx: empty-state copy
- HandParamsForm.tsx: empty-list hint
- CreateTriggerModal.tsx: success toast
- MessageSearch.tsx: time filter / search history

BLK-6: added --exclude zclaw-saas to the CI/Release workflows
- ci.yml: clippy / test / build steps
- release.yml: test step

Verification: cargo check ✓ | tsc --noEmit ✓
2026-04-17 18:12:42 +08:00
iven
93df380ca8 docs(wiki): BUG-M4/L1 fixed + wiki numbers updated
- BUG-M4 marked as fixed (admin_guard_middleware)
- BUG-L1 marked as verified fixed (code unified to pain_seed_categories)
- All E2E 04-17 MEDIUM/LOW issues closed
- butler.md/log.md: pain_seeds → pain_seed_categories
2026-04-17 11:46:04 +08:00
iven
90340725a4 fix(saas): admin_guard_middleware — non-admin users now uniformly get 403
BUG-M4 fix: previously, when a non-admin user sent a malformed body to an admin
endpoint, Axum deserialized the body first and returned 422, bypassing the
permission check.

- New admin_guard_middleware (auth/mod.rs) intercepts at the middleware layer
- account::admin_routes() split (dashboard made standalone)
- Guard layer applied to billing::admin_routes() + account::admin_routes()
- Non-admin users now get 403 regardless of whether the body is valid
2026-04-17 11:45:55 +08:00
iven
b2758d34e9 docs(wiki): add 04-17 regression-verification record — 13/13 PASS
- Phase 1: all 6 bug-fix regressions PASS (H1/H2/M1/M2/M3/M5)
- Phase 2: Pipeline + Skill subsystem chains all PASS
- Phase 3: Butler + memory integration all PASS
- BUG-L2 Pipeline deserialization verified fixed
- Memory system: 381 memories, 12-agent isolation working
2026-04-17 10:45:49 +08:00
iven
a504a40395 fix: 7 E2E bug fixes — Dashboard 404 / memory dedup / memory injection / invoice_id / prompt versioning
P0:
- BUG-H1: Dashboard route /api/v1/stats/dashboard → /api/v1/admin/dashboard

P1:
- BUG-H2: viking_add pre-checks content_hash for dedup and returns a "deduped" status; SqliteStorage backfills content_hash for existing entries on startup
- BUG-M5: saas-relay-client calls viking_inject_prompt before sending to inject cross-session memories

P2:
- BUG-M1: PaymentResult gains an invoice_id field; query_payment_status returns invoice_id
- BUG-M2: UpdatePromptRequest gains content fields; updates now auto-create a new version and increment current_version
- BUG-M3: viking_find scope parameter documented (by design; callers must pass the agent scope)
- BUG-M4: missing Dashboard route fixed; require_admin at the handler layer correctly returns 403

P3 (confirmed fixed / not code issues):
- BUG-L1: pain_seed_categories unified; no pain_seeds remnants
- BUG-L2: pipeline_create parameter format is correct; E2E test-methodology issue
2026-04-17 03:31:06 +08:00
iven
1309101a94 fix(ui): agent panel not updating with the conversation — event ordering + clones refresh
- streamStore: fire the zclaw:agent-profile-updated event after memory extraction (in .then()) instead of before it
- RightPanel: the profile-update handler now calls loadClones() to refresh selectedClone data
2026-04-16 22:57:32 +08:00
iven
0d79993691 fix(saas): 3 P0 security/functional fixes + TRUTH.md number calibration
P0-01: Admin ApiKeys creation mismatch between frontend and backend
- Frontend service switched back from /keys to /tokens (api_tokens table)
- Frontend UI fields {name, expires_days, permissions} match the old route

P0-02: account-lockout check error handling
- unwrap_or(false) replaced with map_err + SaasError propagation
- SQL query failures now return an error instead of silently skipping the lockout check

P0-03: logout refresh-token revocation hardening
- New access-token cookie fallback for extracting account_id
- Refresh tokens can now also be revoked in the Tauri desktop Bearer-auth scenario

TRUTH.md calibration: Tauri 183→190, invoke 95→104, .route() 136→137, middleware 15→14
2026-04-16 22:22:12 +08:00
iven
a0d1392371 fix(ui): 5 E2E test bug fixes — agent 502 / error persistence / model flags / side panel / memory page
- BUG-01: createFromTemplate wraps the local Kernel call in try-catch and skips it in saas-relay mode
- BUG-02: upsertActiveConversation strips error/streaming/optimistic fields before persisting
- BUG-04: ModelSelector gains an available flag; ChatArea tracks failed model IDs
- BUG-05: VikingPanel drops the status?.available gate; shows disabled state + reconnect button when unavailable
- BUG-06: side-panel tooltip changed to "view artifact files"; empty state gains an icon and description
2026-04-16 19:12:21 +08:00
iven
7db9eb29a0 fix(butler): useButlerInsights queries pain points/solutions with resolvedAgentId
An audit found useButlerInsights still queried pain points with the raw agentId
("1") while pain points are stored by kernel UUID, yielding empty results.
Switched to effectiveAgentId (resolvedAgentId ?? agentId) to keep the query
path consistent.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-16 17:29:16 +08:00
iven
1e65b56a0f fix(identity): 3 root-cause fixes — agent ID mapping + user_profile reads + user-profile fallback
Issue 2: IdentityFile enum completed with the UserProfile variant
- get_file()/propose_change()/approve_proposal() gained the missing match arms
- identity_get_file/identity_propose_change Tauri commands support user_profile

Issue 1: agent ID mapping mechanism
- New resolveKernelAgentId() utility (with caching)
- ButlerPanel queries VikingStorage with the kernel UUID instead of SaaS relay "1"

Issue 3: user-profile fallback injection
- build_system_prompt is now async; when the identity user_profile is still the
  default, it queries the 5 most recent memories under the VikingStorage
  preferences path as a fallback
- intelligence_hooks call sites updated with .await

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-16 17:07:38 +08:00
iven
3c01754c40 fix(agent): 12 full-stack fixes across the agent conversation chain
Deep end-to-end verification surfaced 12 issues, fixed across 6 phases:

Phase 5 — quick UX fixes:
- #9: SimpleSidebar gains a new-conversation button (SquarePen + useChatStore)
- #5: model list JOINs provider_keys to filter out models without an API key
- #11: AgentOnboardingWizard adds 4 industry focus options
  (healthcare / education / finance / legal compliance)

Phase 1 — ButlerPanel memory fixes:
- #2a: MemorySection URI corrected from viking://agent/.../memories/ to agent://.../
- #2b: the "analyze conversation now" button actually triggers extractAndStoreMemories

Phase 2 — FTS5 Chinese tokenization:
- #4: FTS5 tokenizer switched from unicode61 to trigram, which natively supports CJK
- Automatic migration: detects old unicode61 tables and rebuilds the index
- sanitize_fts_query supports Chinese quoted-phrase queries

Phase 3 — cross-session identity persistence:
- #6-8: re-enabled USER.md injection into the system prompt (truncated to the first 10 lines)

Phase 4 — agent panel sync:
- #1,#10: listClones expanded from 4 fields to a full mapping
  (soul/userProfile parsed for nickname/emoji/userName/userRole)
- updateClone syncs nickname→SOUL.md and userName/userRole→USER.md
  via the identity system

Phase 6 — agent creation resilience:
- #12: createFromTemplate gains a fallback when SaaS is unavailable

Verification: tsc --noEmit, cargo check
2026-04-16 09:21:46 +08:00
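The Phase 2 switch from `unicode61` to the `trigram` tokenizer works because trigram indexing needs no word boundaries. A minimal sketch of what a trigram tokenizer does with CJK input (illustration only, not SQLite's implementation):

```rust
/// Split a string into character trigrams, the unit the FTS5
/// `trigram` tokenizer indexes. It operates on Unicode scalar
/// values, which is why it can index CJK text that the
/// word-boundary-based `unicode61` tokenizer cannot segment.
fn trigrams(text: &str) -> Vec<String> {
    let chars: Vec<char> = text.chars().collect();
    if chars.len() < 3 {
        return Vec::new();
    }
    chars.windows(3).map(|w| w.iter().collect()).collect()
}

fn main() {
    // CJK text has no spaces, but every 3-char window is still indexed:
    assert_eq!(trigrams("查房记录"), vec!["查房记", "房记录"]);
    // Strings shorter than 3 chars produce no trigrams:
    assert!(trigrams("查房").is_empty());
    println!("ok");
}
```

Queries are matched against the same trigram sets, so a substring query like "记录" can still hit via the trigrams that contain it, with no dictionary-based segmentation required.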
iven
08af78aa83 docs: 2026-04-16 change record — parameter-name fix + decryption self-healing + settings cleanup
- known-issues.md: 3 new fix records (heartbeat params / relay decryption / settings cleanup)
- log.md: appended the 2026-04-16 changelog
2026-04-16 08:06:02 +08:00
iven
b69dc6115d fix(relay): self-healing for API-key decryption failures — startup migration + tolerant skip
Root cause: select_best_key returned 500 immediately on a decryption failure
instead of trying the next key. A single key in an old encryption format in
the DB blocked every relay request.

Fix:
- key_pool: on decryption failure, warn + skip to the next key instead of returning 500
- key_pool: new heal_provider_keys() startup self-healing migration
  - attempts to decrypt every encrypted key
  - decryption succeeds → re-encrypt with the current secret (idempotent)
  - decryption fails → mark is_active=false + warn
- main.rs: run the self-healing migration at startup (after the TOTP migration)
2026-04-16 02:40:44 +08:00
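The self-healing pass above is a per-key loop with two outcomes. A minimal sketch with stubbed crypto — `try_decrypt` and the prefix scheme here are placeholders, not the real key_pool functions:

```rust
/// Outcome of the startup self-healing pass for one provider key.
#[derive(Debug, PartialEq)]
enum HealResult {
    ReEncrypted, // decrypted OK -> re-encrypted with the current secret
    Deactivated, // decryption failed -> is_active = false + warn
}

/// Stub for the real decryption: here, "legacy:" keys decrypt fine
/// and anything else fails. Placeholder logic for illustration only.
fn try_decrypt(stored: &str) -> Option<String> {
    stored.strip_prefix("legacy:").map(str::to_string)
}

/// One iteration of heal_provider_keys(): never errors out the whole
/// pass; a bad key is deactivated instead of aborting with a 500.
fn heal_key(stored: &str) -> HealResult {
    match try_decrypt(stored) {
        Some(_plaintext) => HealResult::ReEncrypted, // idempotent re-encrypt
        None => HealResult::Deactivated,             // skip, don't fail
    }
}

fn main() {
    assert_eq!(heal_key("legacy:sk-abc"), HealResult::ReEncrypted);
    assert_eq!(heal_key("corrupt:???"), HealResult::Deactivated);
    println!("ok");
}
```

The key design point matches the request path fix: an individual undecryptable key degrades only itself, never the whole pool.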
iven
7dea456fda chore(settings): remove usage-stats and credits pages — redundant with subscription billing
UsageStats and Credits are already covered by PricingPage (subscription &
billing); removing the redundant pages simplifies settings navigation.
2026-04-16 02:07:39 +08:00
iven
f6c5dd21ce fix(heartbeat): correct Tauri invoke parameter names snake_case → camelCase
Tauri 2.x renames Rust snake_case parameters to camelCase by default, so
frontend invoke calls must use camelCase (agentId, not agent_id).

Fixed 3 invoke call sites:
- heartbeat_update_memory_stats (agentId, taskCount, totalEntries, storageSizeBytes)
- heartbeat_record_correction (agentId, correctionType)
- heartbeat_record_interaction (agentId)
2026-04-16 00:03:57 +08:00
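The renaming above is a mechanical snake_case→camelCase mapping. A minimal sketch of that conversion — an illustration of the rule, not Tauri's internal code:

```rust
/// Convert a snake_case identifier to camelCase, mirroring how
/// Tauri 2.x exposes Rust command parameters to the frontend.
/// Illustrative helper only, not Tauri's actual implementation.
fn snake_to_camel(name: &str) -> String {
    let mut out = String::with_capacity(name.len());
    let mut upper_next = false;
    for ch in name.chars() {
        if ch == '_' {
            upper_next = true; // drop the underscore, upcase what follows
        } else if upper_next {
            out.extend(ch.to_uppercase());
            upper_next = false;
        } else {
            out.push(ch);
        }
    }
    out
}

fn main() {
    // The three fixed call sites all follow this mapping:
    assert_eq!(snake_to_camel("agent_id"), "agentId");
    assert_eq!(snake_to_camel("storage_size_bytes"), "storageSizeBytes");
    assert_eq!(snake_to_camel("correction_type"), "correctionType");
    println!("ok");
}
```

Keeping the frontend keys in camelCase (rather than renaming the Rust parameters) preserves idiomatic naming on both sides of the bridge.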
iven
47250a3b70 docs: sync docs for the unified Heartbeat health system — TRUTH + wiki + CLAUDE.md §13
- TRUTH.md: Tauri 182→183, React 104→105, lib 85→76
- wiki/index.md: synced key numbers
- wiki/log.md: appended the 2026-04-15 Heartbeat change record
- CLAUDE.md §13: updated architecture snapshot + recent changes
2026-04-15 23:22:43 +08:00
iven
215c079d29 fix(intelligence): unified Heartbeat health system — 6 broken-link fixes + health panel + SaaS auto-recovery
Rust backend (heartbeat.rs):
- Real-time alert push: OnceLock<AppHandle> + Tauri emit heartbeat:alert
- Dynamic interval: tokio::select! + Notify replaces the immutable interval
- Config persistence: update_config writes to VikingStorage
- heartbeat_init restores config from VikingStorage
- Removed dead code (subscribe, HeartbeatCheckFn)
- Layered fallback handling for memory stats

New health_snapshot.rs:
- HealthSnapshot Tauri command — on-demand engine/memory status query
- Registered in the lib.rs invoke_handler

Frontend fixes:
- HeartbeatConfig handleSave syncs to the Rust backend
- App.tsx reads persisted config from localStorage + heartbeat:alert listener + toast
- saasStore probes for recovery with exponential backoff after degradation + saas-recovered event
- New HealthPanel.tsx read-only health panel (4 cards + alert list)
- SettingsLayout gains a health navigation entry

Cleanup:
- Deleted the directory version of intelligence-client/ (9 files, -1640 lines; the single-file version is the live code)
2026-04-15 23:19:24 +08:00
iven
043824c722 perf(runtime): precompile nl_schedule regexes — 9 LazyLock statics replace per-call compilation
Promoted the 9 Regex::new() calls inside parse_nl_schedule from per-call
compilation to std::sync::LazyLock<Regex> statics: compiled once on first
use, reused on every later call. All 16 unit tests pass.
2026-04-15 13:34:27 +08:00
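The pattern is plain `std`. A minimal sketch with a stand-in for the compiled regex — the real statics hold `regex::Regex` (an external crate), so a simple keyword table plays that role here:

```rust
use std::sync::LazyLock;

// Stand-in for a compiled Regex: an "expensive" structure built once.
// In the real nl_schedule code each static holds a regex::Regex built
// by Regex::new(...); the closure runs exactly once, on first access,
// instead of on every parse call.
static CRON_KEYWORDS: LazyLock<Vec<(&'static str, &'static str)>> = LazyLock::new(|| {
    vec![
        ("every morning", "0 9 * * *"),
        ("every hour", "0 * * * *"),
    ]
});

/// Look up a cron expression for natural-language text.
/// Hypothetical simplification of parse_nl_schedule for illustration.
fn parse_schedule(text: &str) -> Option<&'static str> {
    CRON_KEYWORDS
        .iter()
        .find(|&&(kw, _)| text.contains(kw))
        .map(|&(_, cron)| cron)
}

fn main() {
    assert_eq!(parse_schedule("remind me every morning"), Some("0 9 * * *"));
    assert_eq!(parse_schedule("no schedule here"), None);
    println!("ok");
}
```

`LazyLock` (stable since Rust 1.80) is the std replacement for the `lazy_static!`/`once_cell::Lazy` idiom, which is what makes this a zero-dependency change.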
iven
bd12bdb62b fix(chat): scheduling audit fixes — eliminate duplicate parsing + ID collisions + input completion
Audit findings fixed:
- H-01: store the ParsedSchedule to avoid duplicate parse_nl_schedule calls
- H-03: append a UUID fragment to trigger IDs to prevent collisions under high concurrency
- C-02: execute_trigger validation error now states clearly that system Hands must be registered
- M-02: SchedulerService passes trigger_name as task_description
- M-01: added a design note explaining why the interception path skips post_hook
2026-04-15 10:02:49 +08:00
iven
28c892fd31 fix(chat): wire up the disconnected chat-scheduling feature — NlScheduleParser + _reminder Hand
Connected the "written but never wired" scheduling chain:
- NlScheduleParser has_schedule_intent/parse_nl_schedule hooked into agent_chat_stream
- New _reminder system Hand bridges scheduled triggers
- TriggerManager hand_id validation allows _-prefixed system Hands
- Chat messages with scheduling intent are intercepted automatically; a trigger is created and a confirmation message returned

Verification: cargo check 0 errors, 49 tests passed;
Tauri MCP "remind me to do rounds at 9am every day" → cron 0 9 * * * confirmation displayed correctly
2026-04-15 09:45:19 +08:00
iven
9715f542b6 docs: pre-release sprint Day 1 doc sync — TRUTH.md + wiki numbers updated
- TRUTH.md: 182 Tauri commands, 95 invokes, 89 @reserved, 0 orphans, 0 Cargo warnings
- wiki/log.md: appended the Day 1 sprint record (5 fixes + 2 annotations)
- wiki/index.md: updated key numbers and verification date
2026-04-15 02:07:54 +08:00
iven
5121a3c599 chore(desktop): full @reserved annotation pass over Tauri commands — 88 commands without frontend callers annotated
- 66 new @reserved annotations (22 already present)
- Covers the agent/butler/classroom/hand/mcp/pipeline/skill/trigger/viking/zclaw modules and more
- MCP commands gain @connected notes describing the frontend integration path
- @reserved total: 89 (including identity_init)
2026-04-15 02:05:58 +08:00
iven
ee1c9ef3ea chore: Cargo warnings down to zero — 39→0 (only sqlx-postgres external-dependency warnings remain)
- runtime: removed unused SessionId/Datelike imports, fixed an unused variable
- intelligence: module-level #![allow(dead_code)] suppresses warnings from reserved Hermes code
- mcp.rs/persist.rs/nl_schedule.rs: #[allow(dead_code)] annotations preserve the interfaces
2026-04-15 01:53:11 +08:00
iven
76d36f62a6 fix(desktop): automatic model routing — first login auto-selects an available model
- saasStore: fetchAvailableModels handles an empty currentModel by auto-selecting the first available model
- connectionStore: syncs currentModel to conversationStore after a successful SaaS relay connection
- Covers both the Tauri and browser SaaS relay paths
- Fixes first-login users having to pick a model manually
2026-04-15 01:45:36 +08:00
iven
be2a136392 fix(saas): automatic relay_tasks timeout cleanup — scan every 5 minutes and mark processing >10 min as failed
- scheduler.rs: new relay timeout-cleanup periodic task inside start_db_cleanup_tasks
- relay_tasks with status=processing whose updated_at is older than 10 minutes are automatically marked failed
- Prevents relay tasks from being stuck in processing forever after a provider key is disabled
2026-04-15 01:41:50 +08:00
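The cleanup condition is plain timestamp arithmetic. A minimal sketch of the decision — the status string and 10-minute threshold come from the commit message, but the function itself is hypothetical (the real code expresses this as a SQL UPDATE):

```rust
use std::time::{Duration, SystemTime};

/// Returns true when a relay task should be marked failed:
/// still `processing`, and not updated for longer than the timeout.
/// Hypothetical helper for illustration; production runs this in SQL.
fn is_stuck(status: &str, updated_at: SystemTime, now: SystemTime, timeout: Duration) -> bool {
    status == "processing"
        && now
            .duration_since(updated_at)
            .map_or(false, |age| age > timeout)
}

fn main() {
    let now = SystemTime::now();
    let timeout = Duration::from_secs(10 * 60); // 10 minutes
    let eleven_min_ago = now - Duration::from_secs(11 * 60);
    let five_min_ago = now - Duration::from_secs(5 * 60);

    assert!(is_stuck("processing", eleven_min_ago, now, timeout));
    assert!(!is_stuck("processing", five_min_ago, now, timeout)); // still fresh
    assert!(!is_stuck("completed", eleven_min_ago, now, timeout)); // wrong status
    println!("ok");
}
```

Using `updated_at` rather than `created_at` matters: a long-running but live stream keeps refreshing its timestamp and is never swept up.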
iven
76cdfd0c00 fix(saas): SSE usage-accounting consistency — write back usage_records + eliminate relay_requests double counting
- service.rs: after the SSE stream ends, write the real token counts back to usage_records (status=success)
- service.rs: the spawned task calls increment_usage to bump tokens + relay_requests in one place
- handlers.rs: removed increment_dimension("relay_requests") from the SSE path to eliminate double counting
- Extract model_id from request_body for precise usage_records attribution
2026-04-15 01:40:27 +08:00
iven
02a4ba5e75 fix(desktop): replace require() with ES imports — fixes production-build crash
- connectionStore: 2 require() calls → loadConversationStore() async preload + closure reference
- saasStore: 1 require() → await import() (logout is async)
- llm-service: 1 require() → top-level import (no circular dependency)
- streamStore: removed the duplicate dynamic import; uses the top-level useConnectionStore
- tsc --noEmit: 0 errors
2026-04-15 00:47:29 +08:00
iven
a8a0751005 docs: wiki three-way integration test V2 results + debug-environment info
- known-issues: added V2 integration-test results (17 passed + 3 pending + SSE token fix)
- development: added complete debug-environment docs (Windows/PostgreSQL/ports/accounts/startup order)
- log: appended the V2 integration record
2026-04-15 00:40:05 +08:00
iven
9c59e6e82a fix(saas): SSE relay token capture — stream_done flag + prefix compatibility
- SseUsageCapture gains a stream_done flag, set on [DONE] and at stream end
- parse_sse_line accepts both the "data:" and "data: " prefixes
- Added a total_tokens fallback parse (some providers omit prompt_tokens)
- Polling now checks stream_done first instead of relying on the total > 0 condition
- On timeout, a warn log records the actual token values

Root cause: when the upstream provider never returns usage in SSE chunks, the
polling-stabilization condition (total > 0) is never satisfied, so the token
count stays at 0.
2026-04-15 00:15:03 +08:00
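The prefix-compatibility part of the fix boils down to tolerant `data:` parsing. A minimal sketch — the signature is hypothetical; the real SseUsageCapture also parses the JSON payload for usage fields:

```rust
/// Extract the payload of an SSE data line, accepting both the
/// "data:" and "data: " prefixes, and flag the [DONE] sentinel.
/// Returns (payload, is_done); None for non-data lines.
/// Hypothetical helper for illustration.
fn parse_sse_line(line: &str) -> Option<(&str, bool)> {
    let payload = line
        .strip_prefix("data: ")
        .or_else(|| line.strip_prefix("data:"))? // tolerate missing space
        .trim();
    Some((payload, payload == "[DONE]"))
}

fn main() {
    assert_eq!(parse_sse_line("data: {\"id\":1}"), Some(("{\"id\":1}", false)));
    assert_eq!(parse_sse_line("data:{\"id\":1}"), Some(("{\"id\":1}", false)));
    assert_eq!(parse_sse_line("data: [DONE]"), Some(("[DONE]", true)));
    assert_eq!(parse_sse_line("event: ping"), None); // non-data lines ignored
    println!("ok");
}
```

Treating `[DONE]` (or stream end) as the authoritative completion signal is what replaces the fragile `total > 0` polling condition described in the root cause.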
iven
27b98cae6f docs: full wiki update — driven by 2026-04-14 code verification
Key number corrections:
- Rust 77K lines (274 .rs files), 189 Tauri commands, 137 SaaS routes
- Admin V2 17 pages, 16 SaaS modules (incl. industry), 22 @reserved
- 20 SQL migrations / 42 tables, 4 TODO/FIXME, 16 dead_code

Content updates:
- known-issues: all V13-GAP items marked fixed + three-way integration test results
- middleware: complete list of 14 runtime + 10 SaaS HTTP layers
- saas: industry module, 13 route modules, 42 data tables
- routing: stores include industryStore, 21 store files
- butler: industry config wired into ButlerPanel, 4 built-in industries
- log: appended three-way integration + V13 fix records
2026-04-14 22:15:53 +08:00
iven
d0aabf5f2e fix(test): correct pain_severity test assertion + code-verified debug-doc update
- test_severity_ordering: fixed a wrong assertion — 2 frustration signals should trigger High, not Medium
- DEBUGGING_PROMPT.md: full code-verification update
  - Numbers corrected: 97 components / 81 lib / 189 commands / 137 routes / 8 workers
  - V13-GAP status updated: 5/6 fixed, 1 marked DEPRECATED
  - Middleware priorities corrected: ButlerRouter@80, DataMasking@90
  - SaaS relay: resolve_model() uses three-level resolution (not exact match)
2026-04-14 22:03:51 +08:00
iven
3c42e0d692 docs: three-way integration test report V2 — P1 fix status update + test screenshots
Full pass over 30+ APIs / 16 Admin pages / 8 Tauri commands; 3 P1 issues fixed
2026-04-14 22:02:27 +08:00
iven
e0eb7173c5 fix: three-way integration P1 fixes — API-keys page crash + desktop 401 recovery + all-zero usage stats
P1-03: vite.config.ts proxy '/api' → '/api/' with a trailing slash,
  preventing the prefix from matching /api-keys and crashing SPA routing

P1-01: kernel_init detects api_key changes (auto-reconnect after token refresh),
  streamStore gains 401 auto-recovery (refresh token → kernel reconnect),
  KernelClient gains a getConfig() method

P1-02: /api/v1/usage totals now read from billing_usage_quotas
  (the authoritative source, written by both the SSE and JSON paths);
  by_model/by_day still read from usage_records
2026-04-14 22:02:02 +08:00
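The P1-03 crash came from plain prefix matching: the prefix `/api` also matches `/api-keys`, so the dev proxy captured an SPA route. The trailing slash narrows the match. A stand-alone illustration of the bug and the fix (not vite's actual matcher, which has more rules):

```rust
/// Naive prefix routing as used by dev-server proxies: a request is
/// proxied when its path starts with the configured prefix.
/// Illustration of the failure mode only.
fn is_proxied(prefix: &str, path: &str) -> bool {
    path.starts_with(prefix)
}

fn main() {
    // Before the fix: '/api' also captures the SPA route '/api-keys'.
    assert!(is_proxied("/api", "/api/v1/usage"));
    assert!(is_proxied("/api", "/api-keys")); // the bug

    // After the fix: '/api/' only captures real API paths.
    assert!(is_proxied("/api/", "/api/v1/usage"));
    assert!(!is_proxied("/api/", "/api-keys"));
    println!("ok");
}
```

The same hazard applies to any router that matches raw string prefixes: a prefix without a trailing separator silently captures sibling paths that merely share its spelling.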
iven
6721a1cc6e fix(admin): industry-selection 500 fix + admin subscription-plan switching
- fix(industry): list_industries SQL parameter misnumbering — the count query and
  items query shared a WHERE clause whose parameters started at $3, while sqlx
  binds in $1/$2 order, causing a 500
- feat(billing): new PUT /admin/accounts/:id/subscription endpoint (super_admin):
  validate the target plan → cancel the current subscription → create a new subscription (30 days) → sync quotas
- feat(admin-v2): Accounts.tsx edit modal gains a "subscription plan" selector
  showing all active plans; saving calls the admin switch-plan API
2026-04-14 19:06:58 +08:00
iven
d2a0c8efc0 fix(saas): startup-crash fixes — config_items constraint + industry type match
- db.rs: config_items INSERT ON CONFLICT (id) → (category, key_path) to match the actual unique constraint
- db.rs: fix_seed_data deletes conflicting rows before renaming categories, avoiding unique-constraint violations
- migration/service.rs: the same ON CONFLICT fix applied to seed_default_config_items + the sync-push INSERT
- industry/types.rs: keywords_count i64→i32 to match the PostgreSQL INT4 column type

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-14 18:35:24 +08:00
iven
70229119be docs: three-way integration test report 2026-04-14 — full pass over 30+ APIs / 16 Admin pages / 8 Tauri commands
2026-04-14 17:48:31 +08:00
iven
dd854479eb fix: three-way integration tests — 2 P1 + 2 P2 + 4 P3 fixes
P1-07: billing get_or_create_usage syncs the max_* columns to the current plan's limits
P1-08: relay handler adds direct quota checks (relay_requests/input/output_tokens)
P2-09: relay failover records tokens and marks the task completed on success
P2-10: Tauri agentStore fetches real usage from the SaaS API in saas-relay mode
P2-14: super_admin gets a synthetic subscription + check_quota bypass
P3-19: new ApiKeys.tsx page replaces the ModelServices route
P3-15: antd destroyOnClose → destroyOnHidden (3 places)
P3-16: ProTable onSearch → onSubmit (2 places)
2026-04-14 17:48:22 +08:00
iven
45fd9fee7b fix(desktop): P0-1 validate SaaS model selection — prevent stale model IDs from failing requests
The Tauri desktop app relays LLM calls through the SaaS token pool, with the
model list provided dynamically by the SaaS backend. The previous
implementation used conversationStore's persisted currentModel directly,
which could be a stale model ID after switching connection modes, causing
relay requests to fail.

Fix:
- Tauri path: build a validModelIds set from the SaaS relayModels id+alias;
  use preferredModel only if it is in the set, otherwise fall back to the
  first available model
- Browser path: likewise use currentModel only if it appears in the SaaS model list

Backend cache.resolve_model() alias resolution is kept as a second line of defense.
2026-04-14 07:08:56 +08:00
iven
4c3136890b fix: three-way integration tests — 2 P0 + 6 P1 + 2 P2 fixes
P0-1: SaaS relay model alias resolution — "glm-4-flash" → "glm-4-flash-250414" (resolve_model)
P0-2: config.rs interpolate_env_vars UTF-8 fix (chars iterator instead of bytes as char)
      + DB startup encoding check + docker-compose UTF-8 encoding parameters

P1-3: UI model selector overrides the agent's default model (model_override end to end: TS→Tauri→Rust kernel)
P1-6: knowledge-search pipeline fix — seed_knowledge creates chunks + default categories (seed/uploaded/distillation)
P1-7: usage limits read from the current plan (not the stale usage table)
P1-8: relay dual-dimension quota check (relay_requests + input_tokens)

P2-9: SSE-path token-count fix — stream-end detection replaces the fixed 500ms sleep + billing increment
2026-04-14 00:17:08 +08:00
iven
0903a0d652 fix(v13): FIX-06 remove PersistentMemoryStore entirely — 665 lines of dead code cleaned up
- persistent.rs 611→57 lines: removed the PersistentMemoryStore struct + all methods + dead embedding global
- memory_commands.rs: MemoryStoreState→Arc<Mutex<()>>, memory_init→no-op, removed 2 @reserved commands
- viking_commands.rs: removed the redundant PersistentMemoryStore embedding config section
- lib.rs: Tauri commands 191→189 (removed memory_configure_embedding + memory_is_embedding_configured)
- TRUTH.md + wiki/log.md numbers synced

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-13 20:58:54 +08:00
iven
fd3e7fd2cb docs: V13 audit-fix doc sync — 6 status updates + middleware 14→15 layers
AUDIT_TRACKER: V13-GAP-01~05 FIXED, GAP-06 PARTIALLY_FIXED
wiki/middleware: 15 layers (TrajectoryRecorder registered in V13)
wiki/log: 2026-04-13 change record
CLAUDE.md: middleware chain 14→15 layers

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-13 01:38:55 +08:00
iven
c167ea4ea5 fix(v13): 6 V13 audit fixes — TrajectoryRecorder registration + industryStore wiring + knowledge search + webhook annotation + structured UI + persistent comments
FIX-01: TrajectoryRecorderMiddleware registered in create_middleware_chain() (priority @650)
FIX-02: industryStore wired into the ButlerPanel industry-expertise display + auto-fetch
FIX-03: desktop knowledge-base search via the saas-knowledge mixin + VikingPanel SaaS KB UI
FIX-04: webhook migration marked deprecated + down-migration comment added
FIX-05: Admin Knowledge gains a structured-data tab (CRUD + row browsing)
FIX-06: PersistentMemoryStore dead_code annotations refined (full migration deferred)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-13 01:34:08 +08:00
294 changed files with 12528 additions and 8412 deletions


@@ -50,7 +50,7 @@ jobs:
- name: Rust Clippy
working-directory: .
-run: cargo clippy --workspace -- -D warnings
+run: cargo clippy --workspace --exclude zclaw-saas -- -D warnings
- name: Install frontend dependencies
working-directory: desktop
@@ -94,7 +94,7 @@ jobs:
- name: Run Rust tests
working-directory: .
-run: cargo test --workspace
+run: cargo test --workspace --exclude zclaw-saas
- name: Install frontend dependencies
working-directory: desktop
@@ -138,7 +138,7 @@ jobs:
- name: Rust release build
working-directory: .
-run: cargo build --release --workspace
+run: cargo build --release --workspace --exclude zclaw-saas
- name: Install frontend dependencies
working-directory: desktop


@@ -45,7 +45,7 @@ jobs:
- name: Run Rust tests
working-directory: .
-run: cargo test --workspace
+run: cargo test --workspace --exclude zclaw-saas
- name: Install frontend dependencies
working-directory: desktop


@@ -227,21 +227,22 @@ Client → handles network communication and protocol conversion
## 6. Autonomous capability system (Hands)
-ZCLAW provides 11 autonomous capability packs (9 enabled + 2 disabled):
+ZCLAW provides 12 autonomous capability packs (7 registered + 3 in development + 2 disabled):
| Hand | Purpose | Status |
|------|------|------|
| Browser | browser automation | ✅ available |
| Collector | data collection & aggregation | ✅ available |
| Researcher | deep research | ✅ available |
-| Predictor | predictive analytics | ❌ disabled (enabled=false), no Rust implementation |
-| Lead | sales-lead discovery | ❌ disabled (enabled=false), no Rust implementation |
| Clip | video processing | ⚠️ requires FFmpeg |
| Twitter | Twitter automation | ✅ available (12 real API v2 calls; write operations require OAuth 1.0a) |
-| Whiteboard | whiteboard presentation | ✅ available (export feature in development, marked demo) |
-| Slideshow | slideshow generation | ✅ available |
-| Speech | speech synthesis | ✅ available (Browser TTS frontend integration complete) |
| Quiz | quiz generation | ✅ available |
+| _reminder | internal system reminders | ✅ available (registered programmatically by the kernel, no HAND.toml) |
+| Whiteboard | whiteboard presentation | 🚧 in development (HAND.toml not merged to main) |
+| Slideshow | slideshow generation | 🚧 in development (HAND.toml not merged to main) |
+| Speech | speech synthesis | 🚧 in development (HAND.toml not merged to main) |
+| Predictor | predictive analytics | ❌ disabled (enabled=false), no Rust implementation |
+| Lead | sales-lead discovery | ❌ disabled (enabled=false), no Rust implementation |
**When a Hand is triggered:**
1. Check whether its dependencies are satisfied
@@ -529,7 +530,7 @@ refactor(store): unify the way Stores fetch data
***
<!-- ARCH-SNAPSHOT-START -->
-<!-- This area is auto-updated by auto-sync; do not edit manually. Last updated: 2026-04-09 -->
+<!-- This area is auto-updated by auto-sync; do not edit manually. Last updated: 2026-04-15 -->
## 13. Current architecture snapshot
@@ -539,13 +540,14 @@ refactor(store): unify the way Stores fetch data
|--------|------|----------|
| Butler mode | ✅ active | 04-12 industry config (4 industries) + cross-session continuity + <butler-context> XML fencing |
| Hermes pipeline | ✅ active | 04-12 trigger-signal persistence + industry dimension for experiences + injection-format optimization |
| Intelligence Heartbeat | ✅ active | 04-15 unified health snapshot (health_snapshot.rs) + HeartbeatManager refactor + HealthPanel frontend |
| Chat stream (ChatStream) | ✅ stable | 04-02 ChatStore split into 4 stores (stream/conversation/message/chat) |
-| Memory pipeline | ✅ stable | 04-02 loop closed: conversation→extraction→FTS5+TF-IDF→retrieval→injection |
+| Memory pipeline | ✅ stable | 04-17 E2E verified: storage+FTS5+TF-IDF+injection loop; dedup + cross-session injection fixed |
| SaaS auth | ✅ stable | token-pool RPM/TPM rotation + JWT password_version invalidation |
| Pipeline DSL | ✅ stable | 04-01 17 YAML templates + DAG executor |
-| Hands system | ✅ stable | 9 enabled (Browser/Collector/Researcher/Twitter/Whiteboard/Slideshow/Speech/Quiz/Clip) |
+| Hands system | ✅ stable | 7 registered (6 HAND.toml + _reminder); Whiteboard/Slideshow/Speech in development |
| Skills system | ✅ stable | 75 SKILL.md files + semantic routing |
-| Middleware chain | ✅ stable | 14 layers (DataMasking@90, ButlerRouter, TrajectoryRecorder@650) |
+| Middleware chain | ✅ stable | 14 layers (ButlerRouter@80, DataMasking@90, Compaction@100, Memory@150, Title@180, SkillIndex@200, DanglingTool@300, ToolError@350, ToolOutputGuard@360, Guardrail@400, LoopGuard@500, SubagentLimit@550, TrajectoryRecorder@650, TokenCalibration@700) |
### Key architecture patterns
@@ -554,16 +556,17 @@ refactor(store): unify the way Stores fetch data
- **Chat streams**: 3 implementations → GatewayClient (WebSocket) / KernelClient (Tauri events) / SaaSRelay (SSE) + 5-min timeout guard. See [ARCHITECTURE_BRIEF.md](docs/ARCHITECTURE_BRIEF.md)
- **Client routing**: `getClient()` 4-branch decision tree → admin routes / SaaS relay (can degrade to local) / local kernel / external gateway
- **SaaS auth**: JWT→OS keyring storage + HttpOnly cookie + token-pool RPM/TPM rate-limited rotation + automatic degradation when SaaS is unreachable
-- **Memory loop**: conversation→extraction_adapter→FTS5 full text+TF-IDF weights→retrieval→system-prompt injection
+- **Memory loop**: conversation→extraction_adapter→FTS5 full text+TF-IDF weights→retrieval→system-prompt injection (E2E verified 04-17; dedup + cross-session injection fixed)
- **LLM drivers**: 4 Rust drivers (Anthropic/OpenAI/Gemini/Local) + domestic compatibility (DeepSeek/Qwen/Moonshot via base_url)
### Recent changes
-1. [04-12] Industry config + butler proactivity, full-stack 5 phases: industry data model + 4 built-in configs + ButlerRouter dynamic keywords + trigger signals + Tauri loading + admin management page + cross-session continuity + XML-fencing injection format
-2. [04-09] Hermes Intelligence Pipeline 4 chunks: ExperienceStore+Extractor, UserProfileStore+Profiler, NlScheduleParser, TrajectoryRecorder+Compressor (684 tests, 0 failed)
-3. [04-09] Butler mode 6 deliverables complete: ButlerRouter + cold start + concise-mode UI + bridge tests + release docs
-3. [04-07] @reserved annotations on 5 butler Tauri commands + pain-point persistence in SQLite
-4. [04-06] 4 pre-release bug fixes (identity override / model config / agent sync / auto identity)
+1. [04-17] Full-system E2E test, 129 chains: 82 PASS / 20 PARTIAL / 1 FAIL / 26 SKIP (effective pass rate 79.1%). 7 bug fixes (Dashboard 404 / memory dedup / memory injection / invoice_id / prompt versioning / agent isolation / industry field)
+2. [04-16] 3 P0 fixes + 5 E2E bug fixes + agent panel refresh + TRUTH.md number calibration
+3. [04-15] Unified Heartbeat health system: health_snapshot.rs unified collector (LLM connection/memory/session/system resources) + heartbeat.rs HeartbeatManager refactor + HealthPanel.tsx frontend panel + Tauri commands 182→183 + intelligence module 15→16 files + deleted 9 obsolete intelligence-client/ files
+4. [04-12] Industry config + butler proactivity, full-stack 5 phases: industry data model + 4 built-in configs + ButlerRouter dynamic keywords + trigger signals + Tauri loading + admin management page + cross-session continuity + XML-fencing injection format
+5. [04-09] Hermes Intelligence Pipeline 4 chunks: ExperienceStore+Extractor, UserProfileStore+Profiler, NlScheduleParser, TrajectoryRecorder+Compressor (684 tests, 0 failed)
+6. [04-09] Butler mode 6 deliverables complete: ButlerRouter + cold start + concise-mode UI + bridge tests + release docs
<!-- ARCH-SNAPSHOT-END -->

Cargo.lock generated
View File

@@ -61,19 +61,6 @@ dependencies = [
"subtle",
]
[[package]]
name = "ahash"
version = "0.8.12"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5a15f179cd60c4584b8a8c596927aadc462e27f2ca70c04e0071964a73ba7a75"
dependencies = [
"cfg-if",
"getrandom 0.3.4",
"once_cell",
"version_check",
"zerocopy",
]
[[package]]
name = "aho-corasick"
version = "1.1.4"
@@ -179,7 +166,7 @@ version = "0.7.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "435a87a52755b8f27fcf321ac4f04b2802e337c8c4872923137471ec39c37532"
dependencies = [
"event-listener 5.4.1",
"event-listener",
"event-listener-strategy",
"futures-core",
"pin-project-lite",
@@ -235,7 +222,7 @@ version = "3.4.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "290f7f2596bd5b78a9fec8088ccd89180d7f9f55b94b0576823bbbdc72ee8311"
dependencies = [
"event-listener 5.4.1",
"event-listener",
"event-listener-strategy",
"pin-project-lite",
]
@@ -253,7 +240,7 @@ dependencies = [
"async-task",
"blocking",
"cfg-if",
"event-listener 5.4.1",
"event-listener",
"futures-lite",
"rustix 1.1.4",
]
@@ -1606,9 +1593,9 @@ dependencies = [
"secrecy",
"serde",
"serde_json",
"serde_yaml",
"serde_yaml_bw",
"sha2",
"sqlx 0.7.4",
"sqlx",
"tauri",
"tauri-build",
"tauri-plugin-mcp",
@@ -1974,12 +1961,6 @@ dependencies = [
"num-traits",
]
[[package]]
name = "event-listener"
version = "2.5.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0206175f82b8d6bf6652ff7d71a1e27fd2e4efde587fd368662814d6ec1d9ce0"
[[package]]
name = "event-listener"
version = "5.4.1"
@@ -1997,7 +1978,7 @@ version = "0.5.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8be9f3dfaaffdae2972880079a491a1a8bb7cbed0b8dd7a347f668b4150a3b93"
dependencies = [
"event-listener 5.4.1",
"event-listener",
"pin-project-lite",
]
@@ -2699,10 +2680,6 @@ name = "hashbrown"
version = "0.14.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e5274423e17b7c9fc20b6e7e208532f9b19825d82dfd615708b70edd83df41f1"
dependencies = [
"ahash",
"allocator-api2",
]
[[package]]
name = "hashbrown"
@@ -2726,15 +2703,6 @@ dependencies = [
"serde_core",
]
[[package]]
name = "hashlink"
version = "0.8.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e8094feaf31ff591f651a2664fb9cfd92bba7a60ce3197265e9482ebe753c8f7"
dependencies = [
"hashbrown 0.14.5",
]
[[package]]
name = "hashlink"
version = "0.10.0"
@@ -2773,9 +2741,6 @@ name = "heck"
version = "0.4.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "95505c38b4572b2d910cecb0281560f54b440a19336cbbcb27bf6ce6adc6f5a8"
dependencies = [
"unicode-segmentation",
]
[[package]]
name = "heck"
@@ -3555,9 +3520,9 @@ dependencies = [
[[package]]
name = "libsqlite3-sys"
version = "0.27.0"
version = "0.30.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "cf4e226dcd58b4be396f7bd3c20da8fdee2911400705297ba7d2d7cc2c30f716"
checksum = "2e99fb7a497b1e3339bc746195567ed8d3e24945ecd636e3619d20b9de9e9149"
dependencies = [
"cc",
"pkg-config",
@@ -4358,12 +4323,6 @@ dependencies = [
"subtle",
]
[[package]]
name = "paste"
version = "1.0.15"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "57c0d7b74b563b49d38dae00a0c37d4d6de9b432382b2892f0574ddcae73fd0a"
[[package]]
name = "pathdiff"
version = "0.2.3"
@@ -4426,7 +4385,7 @@ version = "0.4.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "fc58e2d255979a31caa7cabfa7aac654af0354220719ab7a68520ae7a91e8c0b"
dependencies = [
"sqlx 0.8.6",
"sqlx",
]
[[package]]
@@ -5492,6 +5451,7 @@ version = "0.23.37"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "758025cb5fccfd3bc2fd74708fd4682be41d99e5dff73c377c0646c6012c73a4"
dependencies = [
"log",
"once_cell",
"ring",
"rustls-pki-types",
@@ -5912,19 +5872,6 @@ dependencies = [
"syn 2.0.117",
]
[[package]]
name = "serde_yaml"
version = "0.9.34+deprecated"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6a8b1a1a2ebf674015cc02edccce75287f1a0130d394307b36743c2f5d504b47"
dependencies = [
"indexmap 2.13.0",
"itoa 1.0.18",
"ryu",
"serde",
"unsafe-libyaml",
]
[[package]]
name = "serde_yaml_bw"
version = "2.5.3"
@@ -6187,78 +6134,17 @@ dependencies = [
"der",
]
[[package]]
name = "sqlformat"
version = "0.2.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7bba3a93db0cc4f7bdece8bb09e77e2e785c20bfebf79eb8340ed80708048790"
dependencies = [
"nom",
"unicode_categories",
]
[[package]]
name = "sqlx"
version = "0.7.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c9a2ccff1a000a5a59cd33da541d9f2fdcd9e6e8229cc200565942bff36d0aaa"
dependencies = [
"sqlx-core 0.7.4",
"sqlx-macros 0.7.4",
"sqlx-mysql",
"sqlx-postgres 0.7.4",
"sqlx-sqlite",
]
[[package]]
name = "sqlx"
version = "0.8.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1fefb893899429669dcdd979aff487bd78f4064e5e7907e4269081e0ef7d97dc"
dependencies = [
"sqlx-core 0.8.6",
"sqlx-macros 0.8.6",
"sqlx-postgres 0.8.6",
]
[[package]]
name = "sqlx-core"
version = "0.7.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "24ba59a9342a3d9bab6c56c118be528b27c9b60e490080e9711a04dccac83ef6"
dependencies = [
"ahash",
"atoi",
"byteorder",
"bytes",
"chrono",
"crc",
"crossbeam-queue",
"either",
"event-listener 2.5.3",
"futures-channel",
"futures-core",
"futures-intrusive",
"futures-io",
"futures-util",
"hashlink 0.8.4",
"hex",
"indexmap 2.13.0",
"log",
"memchr",
"once_cell",
"paste",
"percent-encoding",
"serde",
"serde_json",
"sha2",
"smallvec",
"sqlformat",
"thiserror 1.0.69",
"tokio",
"tokio-stream",
"tracing",
"url",
"sqlx-core",
"sqlx-macros",
"sqlx-mysql",
"sqlx-postgres",
"sqlx-sqlite",
]
[[package]]
@@ -6269,16 +6155,17 @@ checksum = "ee6798b1838b6a0f69c007c133b8df5866302197e404e8b6ee8ed3e3a5e68dc6"
dependencies = [
"base64 0.22.1",
"bytes",
"chrono",
"crc",
"crossbeam-queue",
"either",
"event-listener 5.4.1",
"event-listener",
"futures-core",
"futures-intrusive",
"futures-io",
"futures-util",
"hashbrown 0.15.5",
"hashlink 0.10.0",
"hashlink",
"indexmap 2.13.0",
"log",
"memchr",
@@ -6289,23 +6176,12 @@ dependencies = [
"sha2",
"smallvec",
"thiserror 2.0.18",
"tokio",
"tokio-stream",
"tracing",
"url",
]
[[package]]
name = "sqlx-macros"
version = "0.7.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4ea40e2345eb2faa9e1e5e326db8c34711317d2b5e08d0d5741619048a803127"
dependencies = [
"proc-macro2",
"quote",
"sqlx-core 0.7.4",
"sqlx-macros-core 0.7.4",
"syn 1.0.109",
]
[[package]]
name = "sqlx-macros"
version = "0.8.6"
@@ -6314,37 +6190,11 @@ checksum = "a2d452988ccaacfbf5e0bdbc348fb91d7c8af5bee192173ac3636b5fb6e6715d"
dependencies = [
"proc-macro2",
"quote",
"sqlx-core 0.8.6",
"sqlx-macros-core 0.8.6",
"sqlx-core",
"sqlx-macros-core",
"syn 2.0.117",
]
[[package]]
name = "sqlx-macros-core"
version = "0.7.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5833ef53aaa16d860e92123292f1f6a3d53c34ba8b1969f152ef1a7bb803f3c8"
dependencies = [
"dotenvy",
"either",
"heck 0.4.1",
"hex",
"once_cell",
"proc-macro2",
"quote",
"serde",
"serde_json",
"sha2",
"sqlx-core 0.7.4",
"sqlx-mysql",
"sqlx-postgres 0.7.4",
"sqlx-sqlite",
"syn 1.0.109",
"tempfile",
"tokio",
"url",
]
[[package]]
name = "sqlx-macros-core"
version = "0.8.6"
@@ -6361,20 +6211,23 @@ dependencies = [
"serde",
"serde_json",
"sha2",
"sqlx-core 0.8.6",
"sqlx-postgres 0.8.6",
"sqlx-core",
"sqlx-mysql",
"sqlx-postgres",
"sqlx-sqlite",
"syn 2.0.117",
"tokio",
"url",
]
[[package]]
name = "sqlx-mysql"
version = "0.7.4"
version = "0.8.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1ed31390216d20e538e447a7a9b959e06ed9fc51c37b514b46eb758016ecd418"
checksum = "aa003f0038df784eb8fecbbac13affe3da23b45194bd57dba231c8f48199c526"
dependencies = [
"atoi",
"base64 0.21.7",
"base64 0.22.1",
"bitflags 2.11.0",
"byteorder",
"bytes",
@@ -6403,48 +6256,9 @@ dependencies = [
"sha1 0.10.6",
"sha2",
"smallvec",
"sqlx-core 0.7.4",
"sqlx-core",
"stringprep",
"thiserror 1.0.69",
"tracing",
"whoami",
]
[[package]]
name = "sqlx-postgres"
version = "0.7.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7c824eb80b894f926f89a0b9da0c7f435d27cdd35b8c655b114e58223918577e"
dependencies = [
"atoi",
"base64 0.21.7",
"bitflags 2.11.0",
"byteorder",
"chrono",
"crc",
"dotenvy",
"etcetera",
"futures-channel",
"futures-core",
"futures-io",
"futures-util",
"hex",
"hkdf",
"hmac",
"home",
"itoa 1.0.18",
"log",
"md-5",
"memchr",
"once_cell",
"rand 0.8.5",
"serde",
"serde_json",
"sha2",
"smallvec",
"sqlx-core 0.7.4",
"stringprep",
"thiserror 1.0.69",
"thiserror 2.0.18",
"tracing",
"whoami",
]
@@ -6459,6 +6273,7 @@ dependencies = [
"base64 0.22.1",
"bitflags 2.11.0",
"byteorder",
"chrono",
"crc",
"dotenvy",
"etcetera",
@@ -6479,7 +6294,7 @@ dependencies = [
"serde_json",
"sha2",
"smallvec",
"sqlx-core 0.8.6",
"sqlx-core",
"stringprep",
"thiserror 2.0.18",
"tracing",
@@ -6488,9 +6303,9 @@ dependencies = [
[[package]]
name = "sqlx-sqlite"
version = "0.7.4"
version = "0.8.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b244ef0a8414da0bed4bb1910426e890b19e5e9bccc27ada6b797d05c55ae0aa"
checksum = "c2d12fe70b2c1b4401038055f90f151b78208de1f9f89a7dbfd41587a10c3eea"
dependencies = [
"atoi",
"chrono",
@@ -6504,10 +6319,11 @@ dependencies = [
"log",
"percent-encoding",
"serde",
"sqlx-core 0.7.4",
"serde_urlencoded",
"sqlx-core",
"thiserror 2.0.18",
"tracing",
"url",
"urlencoding",
]
[[package]]
@@ -7824,12 +7640,6 @@ version = "0.2.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ebc1c04c71510c7f702b52b7c350734c9ff1295c464a03335b00bb84fc54f853"
[[package]]
name = "unicode_categories"
version = "0.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "39ec24b3121d976906ece63c9daad25b85969647682eee313cb5779fdd69e14e"
[[package]]
name = "universal-hash"
version = "0.5.1"
@@ -7840,12 +7650,6 @@ dependencies = [
"subtle",
]
[[package]]
name = "unsafe-libyaml"
version = "0.2.11"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "673aac59facbab8a9007c7f6108d11f63b603f7cabff99fabf650fea5c32b861"
[[package]]
name = "unsafe-libyaml-norway"
version = "0.2.15"
@@ -7858,6 +7662,35 @@ version = "0.9.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8ecb6da28b8a351d773b68d5825ac39017e680750f980f3a1a85cd8dd28a47c1"
[[package]]
name = "ureq"
version = "3.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "dea7109cdcd5864d4eeb1b58a1648dc9bf520360d7af16ec26d0a9354bafcfc0"
dependencies = [
"base64 0.22.1",
"flate2",
"log",
"percent-encoding",
"rustls",
"rustls-pki-types",
"ureq-proto",
"utf8-zero",
"webpki-roots",
]
[[package]]
name = "ureq-proto"
version = "0.6.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e994ba84b0bd1b1b0cf92878b7ef898a5c1760108fe7b6010327e274917a808c"
dependencies = [
"base64 0.22.1",
"http 1.4.0",
"httparse",
"log",
]
[[package]]
name = "url"
version = "2.5.8"
@@ -7895,6 +7728,12 @@ version = "0.7.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "09cc8ee72d2a9becf2f2febe0205bbed8fc6615b7cb429ad062dc7b7ddd036a9"
[[package]]
name = "utf8-zero"
version = "0.8.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b8c0a043c9540bae7c578c88f91dda8bd82e59ae27c21baca69c8b191aaf5a6e"
[[package]]
name = "utf8_iter"
version = "1.0.4"
@@ -9573,7 +9412,7 @@ dependencies = [
"async-trait",
"blocking",
"enumflags2",
"event-listener 5.4.1",
"event-listener",
"futures-core",
"futures-lite",
"hex",
@@ -9630,7 +9469,7 @@ dependencies = [
"libsqlite3-sys",
"serde",
"serde_json",
"sqlx 0.7.4",
"sqlx",
"thiserror 2.0.18",
"tokio",
"tokio-test",
@@ -9696,7 +9535,7 @@ dependencies = [
"libsqlite3-sys",
"serde",
"serde_json",
"sqlx 0.7.4",
"sqlx",
"thiserror 2.0.18",
"tokio",
"tracing",
@@ -9723,7 +9562,6 @@ dependencies = [
"tracing",
"uuid",
"zclaw-hands",
"zclaw-kernel",
"zclaw-runtime",
"zclaw-skills",
"zclaw-types",
@@ -9809,7 +9647,7 @@ dependencies = [
"serde_json",
"sha2",
"socket2 0.5.10",
"sqlx 0.7.4",
"sqlx",
"tempfile",
"thiserror 2.0.18",
"tokio",
@@ -9840,6 +9678,8 @@ dependencies = [
"thiserror 2.0.18",
"tokio",
"tracing",
"ureq",
"url",
"uuid",
"wasmtime",
"wasmtime-wasi",

View File

@@ -57,12 +57,15 @@ chrono = { version = "0.4", features = ["serde"] }
uuid = { version = "1", features = ["v4", "v5", "serde"] }
# Database
sqlx = { version = "0.7", features = ["runtime-tokio", "sqlite", "postgres", "chrono"] }
libsqlite3-sys = { version = "0.27", features = ["bundled"] }
sqlx = { version = "0.8", features = ["runtime-tokio", "sqlite", "postgres", "chrono"] }
libsqlite3-sys = { version = "0.30", features = ["bundled"] }
# HTTP client (for LLM drivers)
reqwest = { version = "0.12", default-features = false, features = ["json", "stream", "rustls-tls"] }
# Synchronous HTTP (for WASM host functions in blocking threads)
ureq = { version = "3", features = ["rustls"] }
# URL parsing
url = "2"

View File

@@ -117,7 +117,7 @@ function Sidebar({
const isActive =
item.path === '/'
? activePath === '/'
: activePath.startsWith(item.path)
: activePath === item.path || activePath.startsWith(item.path + '/')
const btn = (
<button
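上面的修复把"前缀匹配"收紧为"精确匹配或子路径匹配",避免 `/api` 之类的路径误激活 `/api-keys` 等同级菜单项。核心判断可抽成如下示意函数(`isActivePath` 为示例命名,非仓库内真实 API):

```typescript
// 示意:仅当路径完全相等、或是 itemPath 的子路径(带 '/' 边界)时才算激活
function isActivePath(activePath: string, itemPath: string): boolean {
  if (itemPath === '/') return activePath === '/'
  return activePath === itemPath || activePath.startsWith(itemPath + '/')
}
```

旧写法 `activePath.startsWith(item.path)` 在 `/api-keys` 上会同时命中 `/api`,这正是该补丁修掉的边界问题。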

View File

@@ -9,6 +9,7 @@ import type { ProColumns } from '@ant-design/pro-components'
import { ProTable } from '@ant-design/pro-components'
import { accountService } from '@/services/accounts'
import { industryService } from '@/services/industries'
import { billingService } from '@/services/billing'
import { PageHeader } from '@/components/PageHeader'
import type { AccountPublic } from '@/types'
@@ -70,6 +71,12 @@ export default function Accounts() {
}
}, [accountIndustries, editingId, form])
// 获取所有活跃计划(用于管理员切换)
const { data: plansData } = useQuery({
queryKey: ['billing-plans'],
queryFn: ({ signal }) => billingService.listPlans(signal),
})
const updateMutation = useMutation({
mutationFn: ({ id, data }: { id: string; data: Partial<AccountPublic> }) =>
accountService.update(id, data),
@@ -101,6 +108,14 @@ export default function Accounts() {
onError: (err: Error) => message.error(err.message || '行业授权更新失败'),
})
// 管理员切换用户计划
const switchPlanMutation = useMutation({
mutationFn: ({ accountId, planId }: { accountId: string; planId: string }) =>
billingService.adminSwitchPlan(accountId, planId),
onSuccess: () => message.success('计划切换成功'),
onError: (err: Error) => message.error(err.message || '计划切换失败'),
})
const columns: ProColumns<AccountPublic>[] = [
{ title: '用户名', dataIndex: 'username', width: 120, tooltip: '搜索用户名、邮箱或显示名' },
{ title: '显示名', dataIndex: 'display_name', width: 120, hideInSearch: true },
@@ -186,7 +201,7 @@ export default function Accounts() {
try {
// 更新基础信息
const { industry_ids, ...accountData } = values
const { industry_ids, plan_id, ...accountData } = values
await updateMutation.mutateAsync({ id: editingId, data: accountData })
// 更新行业授权(如果变更了)
@@ -201,6 +216,11 @@ export default function Accounts() {
queryClient.invalidateQueries({ queryKey: ['account-industries'] })
}
// 切换订阅计划(如果选择了新计划)
if (plan_id) {
await switchPlanMutation.mutateAsync({ accountId: editingId, planId: plan_id })
}
handleClose()
} catch {
// Errors handled by mutation onError callbacks
@@ -218,6 +238,11 @@ export default function Accounts() {
label: `${item.icon} ${item.name}`,
}))
const planOptions = (plansData || []).map((plan) => ({
value: plan.id,
label: `${plan.display_name}(${(plan.price_cents / 100).toFixed(0)}/月)`,
}))
return (
<div>
<PageHeader title="账号管理" description="管理系统用户账号、角色、权限与行业授权" />
@@ -256,7 +281,7 @@ export default function Accounts() {
open={modalOpen}
onOk={handleSave}
onCancel={handleClose}
confirmLoading={updateMutation.isPending || setIndustriesMutation.isPending}
confirmLoading={updateMutation.isPending || setIndustriesMutation.isPending || switchPlanMutation.isPending}
width={560}
>
<Form form={form} layout="vertical" className="mt-4">
@@ -280,6 +305,21 @@ export default function Accounts() {
]} />
</Form.Item>
<Divider></Divider>
<Form.Item
name="plan_id"
label="切换计划"
extra="选择新计划后保存将立即切换。留空则不修改当前计划。"
>
<Select
allowClear
placeholder="不修改当前计划"
options={planOptions}
loading={!plansData}
/>
</Form.Item>
<Divider></Divider>
<Form.Item

View File

@@ -0,0 +1,169 @@
import { useState } from 'react'
import { useQuery, useMutation, useQueryClient } from '@tanstack/react-query'
import { Button, message, Tag, Modal, Form, Input, InputNumber, Select, Space, Popconfirm, Typography } from 'antd'
import { PlusOutlined, CopyOutlined } from '@ant-design/icons'
import { ProTable } from '@ant-design/pro-components'
import type { ProColumns } from '@ant-design/pro-components'
import { apiKeyService } from '@/services/api-keys'
import type { TokenInfo } from '@/types'
const { Text, Paragraph } = Typography
const PERMISSION_OPTIONS = [
{ label: 'Relay Chat', value: 'relay:use' },
{ label: 'Knowledge Read', value: 'knowledge:read' },
{ label: 'Knowledge Write', value: 'knowledge:write' },
{ label: 'Agent Read', value: 'agent:read' },
{ label: 'Agent Write', value: 'agent:write' },
]
export default function ApiKeys() {
const queryClient = useQueryClient()
const [form] = Form.useForm()
const [createOpen, setCreateOpen] = useState(false)
const [newToken, setNewToken] = useState<string | null>(null)
const [page, setPage] = useState(1)
const [pageSize, setPageSize] = useState(20)
const { data, isLoading } = useQuery({
queryKey: ['api-keys', page, pageSize],
queryFn: ({ signal }) => apiKeyService.list({ page, page_size: pageSize }, signal),
})
const createMutation = useMutation({
mutationFn: (values: { name: string; expires_days?: number; permissions: string[] }) =>
apiKeyService.create(values),
onSuccess: (result: TokenInfo) => {
message.success('API 密钥创建成功')
if (result.token) {
setNewToken(result.token)
}
queryClient.invalidateQueries({ queryKey: ['api-keys'] })
form.resetFields()
},
onError: (err: Error) => message.error(err.message || '创建失败'),
})
const revokeMutation = useMutation({
mutationFn: (id: string) => apiKeyService.revoke(id),
onSuccess: () => {
message.success('密钥已吊销')
queryClient.invalidateQueries({ queryKey: ['api-keys'] })
},
onError: (err: Error) => message.error(err.message || '吊销失败'),
})
const handleCreate = async () => {
const values = await form.validateFields()
createMutation.mutate(values)
}
const columns: ProColumns<TokenInfo>[] = [
{ title: '名称', dataIndex: 'name', width: 180 },
{
title: '前缀',
dataIndex: 'token_prefix',
width: 120,
render: (val: string) => <Text code>{val}...</Text>,
},
{
title: '权限',
dataIndex: 'permissions',
width: 240,
render: (perms: string[]) =>
perms?.map((p) => <Tag key={p}>{p}</Tag>) || '-',
},
{
title: '最后使用',
dataIndex: 'last_used_at',
width: 180,
render: (val: string) => (val ? new Date(val).toLocaleString() : <Text type="secondary">从未使用</Text>),
},
{
title: '过期时间',
dataIndex: 'expires_at',
width: 180,
render: (val: string) =>
val ? new Date(val).toLocaleString() : <Text type="secondary">永不过期</Text>,
},
{
title: '创建时间',
dataIndex: 'created_at',
width: 180,
render: (val: string) => new Date(val).toLocaleString(),
},
{
title: '操作',
width: 100,
render: (_: unknown, record: TokenInfo) => (
<Popconfirm
title="确定吊销此密钥?"
description="吊销后使用该密钥的所有请求将被拒绝"
onConfirm={() => revokeMutation.mutate(record.id)}
>
<Button danger size="small">吊销</Button>
</Popconfirm>
),
},
]
return (
<div style={{ padding: 24 }}>
<ProTable<TokenInfo>
columns={columns}
dataSource={data?.items || []}
loading={isLoading}
rowKey="id"
search={false}
pagination={{
current: page,
pageSize,
total: data?.total || 0,
onChange: (p, ps) => { setPage(p); setPageSize(ps) },
}}
toolBarRender={() => [
<Button key="create" type="primary" icon={<PlusOutlined />} onClick={() => setCreateOpen(true)}>
创建密钥
</Button>,
]}
/>
<Modal
title="创建 API 密钥"
open={createOpen}
onOk={handleCreate}
onCancel={() => { setCreateOpen(false); setNewToken(null); form.resetFields() }}
confirmLoading={createMutation.isPending}
destroyOnHidden
>
{newToken ? (
<div style={{ marginBottom: 16 }}>
<Paragraph type="warning">
密钥仅显示一次,请立即复制保存。
</Paragraph>
<Space>
<Text code style={{ fontSize: 13 }}>{newToken}</Text>
<Button
icon={<CopyOutlined />}
size="small"
onClick={() => { navigator.clipboard.writeText(newToken); message.success('已复制') }}
/>
</Space>
</div>
) : (
<Form form={form} layout="vertical">
<Form.Item name="name" label="密钥名称" rules={[{ required: true, message: '请输入名称' }]}>
<Input placeholder="例如: 生产环境 API Key" />
</Form.Item>
<Form.Item name="expires_days" label="有效期 (天)">
<InputNumber min={1} max={3650} placeholder="留空表示永不过期" style={{ width: '100%' }} />
</Form.Item>
<Form.Item name="permissions" label="权限" rules={[{ required: true, message: '请选择至少一项权限' }]}>
<Select mode="multiple" options={PERMISSION_OPTIONS} placeholder="选择权限" />
</Form.Item>
</Form>
)}
</Modal>
</div>
)
}

View File

@@ -144,7 +144,7 @@ function IndustryListPanel() {
rowKey="id"
search={{
onReset: () => { setFilters({}); setPage(1) },
onSearch: (values) => { setFilters(values); setPage(1) },
onSubmit: (values) => { setFilters(values); setPage(1) },
}}
toolBarRender={() => [
<Button key="create" type="primary" icon={<PlusOutlined />} onClick={() => setCreateOpen(true)}>
@@ -225,7 +225,7 @@ function IndustryEditModal({ open, industryId, onClose }: {
onOk={() => form.submit()}
confirmLoading={updateMutation.isPending}
width={720}
destroyOnClose
destroyOnHidden
>
{isLoading ? (
<div className="flex justify-center py-8"><Spin /></div>
@@ -300,7 +300,7 @@ function IndustryCreateModal({ open, onClose }: {
onOk={() => form.submit()}
confirmLoading={createMutation.isPending}
width={640}
destroyOnClose
destroyOnHidden
>
<Form
form={form}

View File

@@ -19,6 +19,8 @@ import type { ProColumns } from '@ant-design/pro-components'
import { ProTable } from '@ant-design/pro-components'
import { knowledgeService } from '@/services/knowledge'
import type { CategoryResponse, KnowledgeItem, SearchResult } from '@/services/knowledge'
import type { StructuredSource } from '@/services/knowledge'
import { TableOutlined } from '@ant-design/icons'
const { TextArea } = Input
const { Text, Title } = Typography
@@ -331,7 +333,7 @@ function ItemsPanel() {
rowKey="id"
search={{
onReset: () => { setFilters({}); setPage(1) },
onSearch: (values) => { setFilters(values); setPage(1) },
onSubmit: (values) => { setFilters(values); setPage(1) },
}}
toolBarRender={() => [
<Button key="create" type="primary" icon={<PlusOutlined />} onClick={() => setCreateOpen(true)}>
@@ -708,12 +710,138 @@ export default function Knowledge() {
icon: <BarChartOutlined />,
children: <AnalyticsPanel />,
},
{
key: 'structured',
label: '结构化数据',
icon: <TableOutlined />,
children: <StructuredSourcesPanel />,
},
]}
/>
</div>
)
}
// === Structured Data Sources Panel ===
function StructuredSourcesPanel() {
const queryClient = useQueryClient()
const [viewingRows, setViewingRows] = useState<string | null>(null)
const { data: sources = [], isLoading } = useQuery({
queryKey: ['structured-sources'],
queryFn: ({ signal }) => knowledgeService.listStructuredSources(signal),
})
const { data: rows = [], isLoading: rowsLoading } = useQuery({
queryKey: ['structured-rows', viewingRows],
queryFn: ({ signal }) => knowledgeService.listStructuredRows(viewingRows!, signal),
enabled: !!viewingRows,
})
const deleteMutation = useMutation({
mutationFn: (id: string) => knowledgeService.deleteStructuredSource(id),
onSuccess: () => {
message.success('数据源已删除')
queryClient.invalidateQueries({ queryKey: ['structured-sources'] })
},
onError: (err: Error) => message.error(err.message || '删除失败'),
})
const columns: ProColumns<StructuredSource>[] = [
{ title: '名称', dataIndex: 'name', key: 'name', width: 200 },
{ title: '类型', dataIndex: 'source_type', key: 'source_type', width: 120, render: (v: string) => <Tag>{v}</Tag> },
{ title: '行数', dataIndex: 'row_count', key: 'row_count', width: 80 },
{
title: '列',
dataIndex: 'columns',
key: 'columns',
width: 250,
render: (cols: string[]) => (
<Space size={[4, 4]} wrap>
{(cols ?? []).slice(0, 5).map((c) => (
<Tag key={c} color="blue">{c}</Tag>
))}
{(cols ?? []).length > 5 && <Tag>+{(cols ?? []).length - 5}</Tag>}
</Space>
),
},
{
title: '创建时间',
dataIndex: 'created_at',
key: 'created_at',
width: 160,
render: (v: string) => new Date(v).toLocaleString('zh-CN'),
},
{
title: '操作',
key: 'actions',
width: 140,
render: (_: unknown, record: StructuredSource) => (
<Space>
<Button type="link" size="small" onClick={() => setViewingRows(record.id)}>
查看数据
</Button>
<Popconfirm title="确认删除此数据源?" onConfirm={() => deleteMutation.mutate(record.id)}>
<Button type="link" size="small" danger>
删除
</Button>
</Popconfirm>
</Space>
),
},
]
// Dynamically generate row columns from the first row's keys
const rowColumns = rows.length > 0
? Object.keys(rows[0].row_data).map((key) => ({
title: key,
dataIndex: ['row_data', key],
key,
ellipsis: true,
render: (v: unknown) => String(v ?? ''),
}))
: []
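这里的列定义仅由首行 `row_data` 的键推导;若各行键不一致,后续行多出的键不会出现在表格里。推导逻辑可简化为(`deriveColumns` 为示例命名):

```typescript
// 示意:从首行 row_data 的键推导列名;空数据集返回空列表
interface StructuredRowLike { row_data: Record<string, unknown> }
function deriveColumns(rows: StructuredRowLike[]): string[] {
  return rows.length > 0 ? Object.keys(rows[0].row_data) : []
}
```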
return (
<div className="space-y-4">
{viewingRows ? (
<Card
title="数据行"
extra={<Button onClick={() => setViewingRows(null)}>返回</Button>}
>
{rowsLoading ? (
<Spin />
) : rows.length === 0 ? (
<Empty description="暂无数据" />
) : (
<Table
dataSource={rows}
columns={rowColumns}
rowKey="id"
size="small"
scroll={{ x: true }}
pagination={{ pageSize: 20 }}
/>
)}
</Card>
) : (
<ProTable<StructuredSource>
dataSource={sources}
columns={columns}
loading={isLoading}
rowKey="id"
search={false}
pagination={{ pageSize: 20 }}
toolBarRender={false}
/>
)}
</div>
)
}
// === 辅助函数 ===
function flattenCategories(cats: CategoryResponse[]): { id: string; name: string }[] {

View File

@@ -327,7 +327,7 @@ export default function ScheduledTasks() {
onCancel={closeModal}
confirmLoading={createMutation.isPending || updateMutation.isPending}
width={520}
destroyOnClose
destroyOnHidden
>
<Form form={form} layout="vertical" className="mt-4">
<Form.Item

View File

@@ -26,7 +26,7 @@ export const router = createBrowserRouter([
{ path: 'providers', lazy: () => import('@/pages/ModelServices').then((m) => ({ Component: m.default })) },
{ path: 'models', lazy: () => import('@/pages/ModelServices').then((m) => ({ Component: m.default })) },
{ path: 'agent-templates', lazy: () => import('@/pages/AgentTemplates').then((m) => ({ Component: m.default })) },
{ path: 'api-keys', lazy: () => import('@/pages/ModelServices').then((m) => ({ Component: m.default })) },
{ path: 'api-keys', lazy: () => import('@/pages/ApiKeys').then((m) => ({ Component: m.default })) },
{ path: 'usage', lazy: () => import('@/pages/Usage').then((m) => ({ Component: m.default })) },
{ path: 'billing', lazy: () => import('@/pages/Billing').then((m) => ({ Component: m.default })) },
{ path: 'relay', lazy: () => import('@/pages/Relay').then((m) => ({ Component: m.default })) },

View File

@@ -1,13 +1,15 @@
import request, { withSignal } from './request'
import type { TokenInfo, CreateTokenRequest, PaginatedResponse } from '@/types'
// 使用 /tokens 路由 (api_tokens 表),前端 UI 字段 {name, expires_days, permissions} 与此后端匹配
// 注: /keys 路由 (account_api_keys 表) 需要 {provider_id, key_value},属于不同的 Key 管理系统
export const apiKeyService = {
list: (params?: Record<string, unknown>, signal?: AbortSignal) =>
request.get<PaginatedResponse<TokenInfo>>('/keys', withSignal({ params }, signal)).then((r) => r.data),
request.get<PaginatedResponse<TokenInfo>>('/tokens', withSignal({ params }, signal)).then((r) => r.data),
create: (data: CreateTokenRequest, signal?: AbortSignal) =>
request.post<TokenInfo>('/keys', data, withSignal({}, signal)).then((r) => r.data),
request.post<TokenInfo>('/tokens', data, withSignal({}, signal)).then((r) => r.data),
revoke: (id: string, signal?: AbortSignal) =>
request.delete(`/keys/${id}`, withSignal({}, signal)).then((r) => r.data),
request.delete(`/tokens/${id}`, withSignal({}, signal)).then((r) => r.data),
}

View File

@@ -90,4 +90,9 @@ export const billingService = {
getPaymentStatus: (id: string, signal?: AbortSignal) =>
request.get<PaymentStatus>(`/billing/payments/${id}`, withSignal({}, signal))
.then((r) => r.data),
/** 管理员切换用户订阅计划 (super_admin only) */
adminSwitchPlan: (accountId: string, planId: string) =>
request.put<{ success: boolean; subscription: Subscription }>(`/admin/accounts/${accountId}/subscription`, { plan_id: planId })
.then((r) => r.data),
}

View File

@@ -62,6 +62,33 @@ export interface ListItemsResponse {
page_size: number
}
// === Structured Data Sources ===
export interface StructuredSource {
id: string
account_id: string
name: string
source_type: string
row_count: number
columns: string[]
created_at: string
updated_at: string
}
export interface StructuredRow {
id: string
source_id: string
row_data: Record<string, unknown>
created_at: string
}
export interface StructuredQueryResult {
row_id: string
source_name: string
row_data: Record<string, unknown>
score: number
}
// === Service ===
export const knowledgeService = {
@@ -159,4 +186,23 @@ export const knowledgeService = {
// 导入
importItems: (data: { category_id: string; files: Array<{ content: string; title?: string; keywords?: string[]; tags?: string[] }> }) =>
request.post('/knowledge/items/import', data).then((r) => r.data),
// === Structured Data Sources ===
listStructuredSources: (signal?: AbortSignal) =>
request.get<StructuredSource[]>('/structured/sources', withSignal({}, signal))
.then((r) => r.data),
getStructuredSource: (id: string, signal?: AbortSignal) =>
request.get<StructuredSource>(`/structured/sources/${id}`, withSignal({}, signal))
.then((r) => r.data),
deleteStructuredSource: (id: string) =>
request.delete(`/structured/sources/${id}`).then((r) => r.data),
listStructuredRows: (sourceId: string, signal?: AbortSignal) =>
request.get<StructuredRow[]>(`/structured/sources/${sourceId}/rows`, withSignal({}, signal))
.then((r) => r.data),
queryStructured: (data: { source_id?: string; query?: string; limit?: number }) =>
request.post<StructuredQueryResult[]>('/structured/query', data).then((r) => r.data),
}

View File

@@ -3,5 +3,5 @@ import type { DashboardStats } from '@/types'
export const statsService = {
dashboard: (signal?: AbortSignal) =>
request.get<DashboardStats>('/stats/dashboard', withSignal({}, signal)).then((r) => r.data),
request.get<DashboardStats>('/admin/dashboard', withSignal({}, signal)).then((r) => r.data),
}

View File

@@ -20,7 +20,7 @@ export default defineConfig({
timeout: 600_000,
proxyTimeout: 600_000,
},
'/api': {
'/api/': {
target: 'http://localhost:8080',
changeOrigin: true,
timeout: 30_000,

View File

@@ -0,0 +1,305 @@
//! 进化引擎中枢
//! 协调 L1/L2/L3 三层进化的触发和执行
//! L1 (记忆进化) 在 GrowthIntegration 中处理
//! L2 (技能进化) 通过 PatternAggregator + SkillGenerator + QualityGate 协调
//! L3 (工作流进化) 通过 WorkflowComposer 协调
//! 反馈闭环通过 FeedbackCollector 管理
use std::sync::Arc;
use crate::experience_store::ExperienceStore;
use crate::feedback_collector::{
FeedbackCollector, FeedbackEntry, TrustUpdate,
};
use crate::pattern_aggregator::{AggregatedPattern, PatternAggregator};
use crate::quality_gate::{QualityGate, QualityReport};
use crate::skill_generator::{SkillCandidate, SkillGenerator};
use crate::workflow_composer::{ToolChainPattern, WorkflowComposer};
use crate::VikingAdapter;
use zclaw_types::Result;
/// 进化引擎配置
#[derive(Debug, Clone)]
pub struct EvolutionConfig {
/// 经验复用次数达到此阈值触发 L2
pub min_reuse_for_skill: u32,
/// 置信度阈值
pub quality_confidence_threshold: f32,
/// 是否启用进化引擎
pub enabled: bool,
}
impl Default for EvolutionConfig {
fn default() -> Self {
Self {
min_reuse_for_skill: 3,
quality_confidence_threshold: 0.7,
enabled: true,
}
}
}
/// 进化引擎中枢
pub struct EvolutionEngine {
viking: Arc<VikingAdapter>,
feedback: Arc<tokio::sync::Mutex<FeedbackCollector>>,
config: EvolutionConfig,
}
impl EvolutionEngine {
pub fn new(viking: Arc<VikingAdapter>) -> Self {
Self {
viking: viking.clone(),
feedback: Arc::new(tokio::sync::Mutex::new(
FeedbackCollector::with_viking(viking),
)),
config: EvolutionConfig::default(),
}
}
/// @reserved: EvolutionEngine L2/L3 feature, post-release integration
/// Backward-compatible constructor
/// Extracts the shared VikingAdapter instance from the ExperienceStore
pub fn from_experience_store(experience_store: Arc<ExperienceStore>) -> Self {
let viking = experience_store.viking().clone();
Self {
viking: viking.clone(),
feedback: Arc::new(tokio::sync::Mutex::new(
FeedbackCollector::with_viking(viking),
)),
config: EvolutionConfig::default(),
}
}
/// @reserved: EvolutionEngine L2/L3 feature, post-release integration
pub fn with_config(mut self, config: EvolutionConfig) -> Self {
self.config = config;
self
}
pub fn set_enabled(&mut self, enabled: bool) {
self.config.enabled = enabled;
}
/// L2 check: find patterns that are ready to evolve
pub async fn check_evolvable_patterns(
&self,
agent_id: &str,
) -> Result<Vec<AggregatedPattern>> {
if !self.config.enabled {
return Ok(Vec::new());
}
let store = ExperienceStore::new(self.viking.clone());
let aggregator = PatternAggregator::new(store);
aggregator
.find_evolvable_patterns(agent_id, self.config.min_reuse_for_skill)
.await
}
/// @reserved: EvolutionEngine L2/L3 feature, post-release integration
/// L2 execution: build the skill-generation prompt for the given pattern.
/// Returns the prompt string; the caller sends it to the LLM and parses the response.
pub fn build_skill_prompt(&self, pattern: &AggregatedPattern) -> String {
SkillGenerator::build_prompt(pattern)
}
/// @reserved: EvolutionEngine L2/L3 feature, post-release integration
/// L2 execution: parse the skill JSON returned by the LLM and run it through the quality gate
pub fn validate_skill_candidate(
&self,
json_str: &str,
pattern: &AggregatedPattern,
existing_triggers: Vec<String>,
) -> Result<(SkillCandidate, QualityReport)> {
let candidate = SkillGenerator::parse_response(json_str, pattern)?;
let gate = QualityGate::new(self.config.quality_confidence_threshold, existing_triggers);
let report = gate.validate_skill(&candidate);
Ok((candidate, report))
}
/// @reserved: EvolutionEngine L2/L3 feature, post-release integration
/// Returns the current configuration
pub fn config(&self) -> &EvolutionConfig {
&self.config
}
// -----------------------------------------------------------------------
// L3: workflow evolution
// -----------------------------------------------------------------------
/// @reserved: EvolutionEngine L2/L3 feature, post-release integration
/// L3: extract repeated tool-chain patterns from trajectory data
pub fn analyze_trajectory_patterns(
&self,
trajectories: &[(String, Vec<String>)], // (session_id, tools_used)
) -> Vec<(ToolChainPattern, Vec<String>)> {
if !self.config.enabled {
return Vec::new();
}
WorkflowComposer::extract_patterns(trajectories)
}
/// @reserved: EvolutionEngine L2/L3 feature, post-release integration
/// L3: build the workflow-generation prompt for the given tool-chain pattern
pub fn build_workflow_prompt(
&self,
pattern: &ToolChainPattern,
frequency: usize,
industry: Option<&str>,
) -> String {
WorkflowComposer::build_prompt(pattern, frequency, industry)
}
// -----------------------------------------------------------------------
// Feedback loop
// -----------------------------------------------------------------------
/// Submit feedback and return the trust update; persisted automatically
pub async fn submit_feedback(&self, entry: FeedbackEntry) -> TrustUpdate {
let mut feedback = self.feedback.lock().await;
let update = feedback.submit_feedback(entry);
// Non-blocking persistence: failures are only logged and do not affect the return value
if let Err(e) = feedback.save().await {
tracing::warn!("[EvolutionEngine] Failed to persist trust records: {}", e);
}
update
}
/// @reserved: EvolutionEngine L2/L3 feature, post-release integration
/// Get evolution artifacts that need optimization
pub async fn get_artifacts_needing_optimization(&self) -> Vec<String> {
self.feedback
.lock()
.await
.get_artifacts_needing_optimization()
.iter()
.map(|r| r.artifact_id.clone())
.collect()
}
/// @reserved: EvolutionEngine L2/L3 feature, post-release integration
/// Get evolution artifacts recommended for archiving
pub async fn get_artifacts_to_archive(&self) -> Vec<String> {
self.feedback
.lock()
.await
.get_artifacts_to_archive()
.iter()
.map(|r| r.artifact_id.clone())
.collect()
}
/// @reserved: EvolutionEngine L2/L3 feature, post-release integration
/// Get recommended artifacts
pub async fn get_recommended_artifacts(&self) -> Vec<String> {
self.feedback
.lock()
.await
.get_recommended_artifacts()
.iter()
.map(|r| r.artifact_id.clone())
.collect()
}
/// Load persisted trust records at startup
pub async fn load_feedback(&self) -> Result<usize> {
self.feedback
.lock()
.await
.load()
.await
.map_err(zclaw_types::ZclawError::Internal)
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::experience_store::Experience;
#[tokio::test]
async fn test_disabled_returns_empty() {
let viking = Arc::new(crate::VikingAdapter::in_memory());
let mut engine = EvolutionEngine::new(viking);
engine.set_enabled(false);
let patterns = engine.check_evolvable_patterns("agent-1").await.unwrap();
assert!(patterns.is_empty());
}
#[tokio::test]
async fn test_no_evolvable_patterns() {
let viking = Arc::new(crate::VikingAdapter::in_memory());
let engine = EvolutionEngine::new(viking);
let patterns = engine.check_evolvable_patterns("unknown-agent").await.unwrap();
assert!(patterns.is_empty());
}
#[tokio::test]
async fn test_finds_evolvable_pattern() {
let viking = Arc::new(crate::VikingAdapter::in_memory());
let store_inner = ExperienceStore::new(viking.clone());
let mut exp = Experience::new(
"agent-1",
"report generation",
"researcher",
vec!["query db".into(), "format".into()],
"success",
);
exp.reuse_count = 5;
store_inner.store_experience(&exp).await.unwrap();
let engine = EvolutionEngine::new(viking);
let patterns = engine.check_evolvable_patterns("agent-1").await.unwrap();
assert_eq!(patterns.len(), 1);
assert_eq!(patterns[0].pain_pattern, "report generation");
}
#[test]
fn test_build_skill_prompt() {
let viking = Arc::new(crate::VikingAdapter::in_memory());
let engine = EvolutionEngine::new(viking);
let exp = Experience::new(
"a", "report", "researcher", vec!["step1".into()], "ok",
);
let pattern = AggregatedPattern {
pain_pattern: "report".to_string(),
experiences: vec![exp],
common_steps: vec!["step1".into()],
total_reuse: 5,
tools_used: vec!["researcher".into()],
industry_context: None,
};
let prompt = engine.build_skill_prompt(&pattern);
assert!(prompt.contains("report"));
}
#[test]
fn test_validate_skill_candidate() {
let viking = Arc::new(crate::VikingAdapter::in_memory());
let engine = EvolutionEngine::new(viking);
let exp = Experience::new(
"a", "report", "researcher", vec!["step1".into()], "ok",
);
let pattern = AggregatedPattern {
pain_pattern: "report".to_string(),
experiences: vec![exp],
common_steps: vec!["step1".into()],
total_reuse: 5,
tools_used: vec!["researcher".into()],
industry_context: None,
};
let json = r##"{"name":"报表技能","description":"生成报表","triggers":["报表","日报"],"tools":["researcher"],"body_markdown":"# 报表\n步骤","confidence":0.9}"##;
let (candidate, report) = engine
.validate_skill_candidate(json, &pattern, vec!["搜索".to_string()])
.unwrap();
assert_eq!(candidate.name, "报表技能");
assert!(report.passed);
}
}
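The two thresholds that drive L2 above (`min_reuse_for_skill` gating `check_evolvable_patterns`, and `quality_confidence_threshold` gating candidate validation) can be sketched standalone. This is a minimal illustration only; `GateConfig` and the helper functions below are hypothetical names that mirror `EvolutionConfig`, not the crate's API:

```rust
// Hypothetical standalone sketch of the L2 gating rule: a pattern becomes
// evolvable once its accumulated reuse reaches `min_reuse_for_skill`, and a
// generated skill candidate passes only when its confidence reaches
// `quality_confidence_threshold` (defaults: 3 and 0.7, as in EvolutionConfig).
struct GateConfig {
    min_reuse_for_skill: u32,
    quality_confidence_threshold: f32,
}

impl Default for GateConfig {
    fn default() -> Self {
        Self { min_reuse_for_skill: 3, quality_confidence_threshold: 0.7 }
    }
}

// Mirrors the reuse-count check behind check_evolvable_patterns.
fn pattern_is_evolvable(cfg: &GateConfig, reuse_count: u32) -> bool {
    reuse_count >= cfg.min_reuse_for_skill
}

// Mirrors the confidence check applied by the quality gate.
fn candidate_passes(cfg: &GateConfig, confidence: f32) -> bool {
    confidence >= cfg.quality_confidence_threshold
}

fn main() {
    let cfg = GateConfig::default();
    assert!(!pattern_is_evolvable(&cfg, 2));
    assert!(pattern_is_evolvable(&cfg, 5)); // reuse_count = 5, as in the test above
    assert!(candidate_passes(&cfg, 0.9));
    assert!(!candidate_passes(&cfg, 0.5));
    println!("gate ok");
}
```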

View File

@@ -0,0 +1,119 @@
//! Structured experience extractor
//! Extracts ExperienceCandidate (pain_pattern → solution_steps → outcome) from conversations
//! and persists it to the ExperienceStore
use std::sync::Arc;
use crate::experience_store::ExperienceStore;
use crate::types::{CombinedExtraction, Outcome};
/// Structured experience extractor.
/// The LLM call is already done by the upper-level MemoryExtractor; this only parses and persists.
pub struct ExperienceExtractor {
store: Option<Arc<ExperienceStore>>,
}
impl ExperienceExtractor {
pub fn new() -> Self {
Self { store: None }
}
pub fn with_store(mut self, store: Arc<ExperienceStore>) -> Self {
self.store = Some(store);
self
}
/// Extract experiences from a CombinedExtraction and persist them.
/// The LLM call is done upstream; this only parses and stores.
pub async fn persist_experiences(
&self,
agent_id: &str,
extraction: &CombinedExtraction,
) -> zclaw_types::Result<usize> {
let store = match &self.store {
Some(s) => s,
None => return Ok(0),
};
let mut count = 0;
for candidate in &extraction.experiences {
if candidate.confidence < 0.6 {
continue;
}
let outcome_str = match candidate.outcome {
Outcome::Success => "success",
Outcome::Partial => "partial",
Outcome::Failed => "failed",
};
let mut exp = crate::experience_store::Experience::new(
agent_id,
&candidate.pain_pattern,
&candidate.context,
candidate.solution_steps.clone(),
outcome_str,
);
// Fill tool_used: take the first entry of tools_used as the primary tool
exp.tool_used = candidate.tools_used.first().cloned();
exp.industry_context = candidate.industry_context.clone();
store.store_experience(&exp).await?;
count += 1;
}
Ok(count)
}
}
impl Default for ExperienceExtractor {
fn default() -> Self {
Self::new()
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::types::{ExperienceCandidate, Outcome};
#[test]
fn test_extractor_new_without_store() {
let ext = ExperienceExtractor::new();
assert!(ext.store.is_none());
}
#[tokio::test]
async fn test_persist_no_store_returns_zero() {
let ext = ExperienceExtractor::new();
let extraction = CombinedExtraction::default();
let count = ext.persist_experiences("agent1", &extraction).await.unwrap();
assert_eq!(count, 0);
}
#[tokio::test]
async fn test_persist_filters_low_confidence() {
let viking = Arc::new(crate::VikingAdapter::in_memory());
let store = Arc::new(ExperienceStore::new(viking));
let ext = ExperienceExtractor::new().with_store(store);
let mut extraction = CombinedExtraction::default();
extraction.experiences.push(ExperienceCandidate {
pain_pattern: "low confidence task".to_string(),
context: "should be filtered".to_string(),
solution_steps: vec!["step1".to_string()],
outcome: Outcome::Success,
confidence: 0.3, // below the 0.6 threshold
tools_used: vec![],
industry_context: None,
});
extraction.experiences.push(ExperienceCandidate {
pain_pattern: "high confidence task".to_string(),
context: "should be stored".to_string(),
solution_steps: vec!["step1".to_string(), "step2".to_string()],
outcome: Outcome::Success,
confidence: 0.9,
tools_used: vec!["researcher".to_string()],
industry_context: Some("healthcare".to_string()),
});
let count = ext.persist_experiences("agent-1", &extraction).await.unwrap();
assert_eq!(count, 1); // only 1 candidate passes the confidence filter
}
}
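The filtering done by `persist_experiences` above (drop candidates below the 0.6 confidence threshold, then serialize the outcome to its string form) can be sketched standalone. The `Outcome` enum here is a minimal stand-in for `crate::types::Outcome`; the helper names are illustrative, not the crate's API:

```rust
// Hypothetical standalone sketch of the persist_experiences filter:
// candidates with confidence < 0.6 are skipped, and Outcome is mapped
// to the string form stored on Experience.
#[derive(Clone, Copy)]
enum Outcome { Success, Partial, Failed }

// Mirrors the match on candidate.outcome in persist_experiences.
fn outcome_str(o: Outcome) -> &'static str {
    match o {
        Outcome::Success => "success",
        Outcome::Partial => "partial",
        Outcome::Failed => "failed",
    }
}

// Mirrors the confidence gate: count how many candidates would be stored.
fn persisted_count(confidences: &[f32]) -> usize {
    confidences.iter().filter(|&&c| c >= 0.6).count()
}

fn main() {
    assert_eq!(outcome_str(Outcome::Success), "success");
    assert_eq!(outcome_str(Outcome::Failed), "failed");
    // Mirrors test_persist_filters_low_confidence: 0.3 is dropped, 0.9 is kept.
    assert_eq!(persisted_count(&[0.3, 0.9]), 1);
    println!("filter ok");
}
```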

View File

@@ -48,6 +48,9 @@ pub struct Experience {
/// Which trigger signal produced this experience.
#[serde(default)]
pub source_trigger: Option<String>,
/// Primary tool/skill used to resolve this pain point.
#[serde(default)]
pub tool_used: Option<String>,
}
impl Experience {
@@ -72,6 +75,7 @@ impl Experience {
updated_at: now,
industry_context: None,
source_trigger: None,
tool_used: None,
}
}
@@ -109,6 +113,11 @@ impl ExperienceStore {
Self { viking }
}
/// Get a reference to the underlying VikingAdapter.
pub fn viking(&self) -> &Arc<VikingAdapter> {
&self.viking
}
/// Store (or overwrite) an experience. The URI is derived from
/// `agent_id + pain_pattern`, ensuring one experience per pattern.
pub async fn store_experience(&self, exp: &Experience) -> zclaw_types::Result<()> {
@@ -119,6 +128,9 @@ impl ExperienceStore {
if let Some(ref industry) = exp.industry_context {
keywords.push(industry.clone());
}
if let Some(ref tool) = exp.tool_used {
keywords.push(tool.clone());
}
let entry = MemoryEntry {
uri,

View File

@@ -19,6 +19,34 @@ pub trait LlmDriverForExtraction: Send + Sync {
messages: &[Message],
extraction_type: MemoryType,
) -> Result<Vec<ExtractedMemory>>;
/// Extract all types in a single LLM call (memories + experiences + profile signals).
/// Default implementation: falls back to 3 independent calls (experiences and profile_signals stay empty).
async fn extract_combined_all(
&self,
messages: &[Message],
) -> Result<crate::types::CombinedExtraction> {
let mut combined = crate::types::CombinedExtraction::default();
for mt in [MemoryType::Preference, MemoryType::Knowledge, MemoryType::Experience] {
if let Ok(mems) = self.extract_memories(messages, mt).await {
combined.memories.extend(mems);
}
}
Ok(combined)
}
/// Single LLM call with a custom prompt; returns the raw text response.
/// Used for combined extraction; the default implementation returns an unsupported error.
async fn extract_with_prompt(
&self,
_messages: &[Message],
_system_prompt: &str,
_user_prompt: &str,
) -> Result<String> {
Err(zclaw_types::ZclawError::Internal(
"extract_with_prompt not implemented".to_string(),
))
}
}
/// Memory Extractor - extracts memories from conversations
@@ -85,13 +113,10 @@ impl MemoryExtractor {
session_id: SessionId,
) -> Result<Vec<ExtractedMemory>> {
// Check if LLM driver is available
- let _llm_driver = match &self.llm_driver {
- Some(driver) => driver,
- None => {
+ if self.llm_driver.is_none() {
tracing::debug!("[MemoryExtractor] No LLM driver configured, skipping extraction");
return Ok(Vec::new());
}
- };
let mut results = Vec::new();
@@ -227,6 +252,299 @@ impl MemoryExtractor {
tracing::info!("[MemoryExtractor] Stored {} memories to OpenViking", stored);
Ok(stored)
}
/// Combined extraction: a single LLM call produces memories + experiences + profile_signals.
///
/// Prefers a single call via `extract_with_prompt()`; if the driver does not support it,
/// falls back to `extract()` plus inferring experiences/profile signals from memories.
pub async fn extract_combined(
&self,
messages: &[Message],
session_id: SessionId,
) -> Result<crate::types::CombinedExtraction> {
let llm_driver = match &self.llm_driver {
Some(driver) => driver,
None => {
tracing::debug!(
"[MemoryExtractor] No LLM driver configured, skipping combined extraction"
);
return Ok(crate::types::CombinedExtraction::default());
}
};
// Try the single-LLM-call path first
let system_prompt = "You are a memory extraction assistant. Analyze conversations and extract \
structured memories, experiences, and profile signals in valid JSON format. \
Always respond with valid JSON only, no additional text or markdown formatting.";
let user_prompt = format!(
"{}{}",
crate::extractor::prompts::COMBINED_EXTRACTION_PROMPT,
format_conversation_text(messages)
);
match llm_driver
.extract_with_prompt(messages, system_prompt, &user_prompt)
.await
{
Ok(raw_text) if !raw_text.trim().is_empty() => {
match parse_combined_response(&raw_text, session_id.clone()) {
Ok(combined) => {
tracing::info!(
"[MemoryExtractor] Combined extraction: {} memories, {} experiences, {} profile signals",
combined.memories.len(),
combined.experiences.len(),
combined.profile_signals.signal_count(),
);
return Ok(combined);
}
Err(e) => {
tracing::warn!(
"[MemoryExtractor] Combined response parse failed, falling back: {}",
e
);
}
}
}
Ok(_) => {
tracing::debug!("[MemoryExtractor] extract_with_prompt returned empty, falling back");
}
Err(e) => {
tracing::debug!(
"[MemoryExtractor] extract_with_prompt not supported ({}), falling back",
e
);
}
}
// Fallback path: use the existing extract(), then infer experiences and profile_signals
let memories = self.extract(messages, session_id).await?;
let experiences = infer_experiences_from_memories(&memories);
let profile_signals = infer_profile_signals_from_memories(&memories);
Ok(crate::types::CombinedExtraction {
memories,
experiences,
profile_signals,
})
}
}
/// Format conversation messages as plain text
fn format_conversation_text(messages: &[Message]) -> String {
messages
.iter()
.filter_map(|msg| match msg {
Message::User { content } => Some(format!("[User]: {}", content)),
Message::Assistant { content, .. } => Some(format!("[Assistant]: {}", content)),
Message::System { content } => Some(format!("[System]: {}", content)),
Message::ToolUse { .. } | Message::ToolResult { .. } => None,
})
.collect::<Vec<_>>()
.join("\n\n")
}
/// Parse a CombinedExtraction from the raw LLM response
pub fn parse_combined_response(
raw: &str,
session_id: SessionId,
) -> Result<crate::types::CombinedExtraction> {
use crate::types::CombinedExtraction;
let json_str = crate::json_utils::extract_json_block(raw);
let parsed: serde_json::Value = serde_json::from_str(json_str).map_err(|e| {
zclaw_types::ZclawError::Internal(format!("Failed to parse combined JSON: {}", e))
})?;
// Parse memories
let memories = parsed
.get("memories")
.and_then(|v| v.as_array())
.map(|arr| {
arr.iter()
.filter_map(|item| parse_memory_item(item, &session_id))
.collect::<Vec<_>>()
})
.unwrap_or_default();
// Parse experiences
let experiences = parsed
.get("experiences")
.and_then(|v| v.as_array())
.map(|arr| {
arr.iter()
.filter_map(parse_experience_item)
.collect::<Vec<_>>()
})
.unwrap_or_default();
// Parse profile_signals
let profile_signals = parse_profile_signals(&parsed);
Ok(CombinedExtraction {
memories,
experiences,
profile_signals,
})
}
/// Parse a single memory item
fn parse_memory_item(
value: &serde_json::Value,
session_id: &SessionId,
) -> Option<ExtractedMemory> {
let content = value.get("content")?.as_str()?.to_string();
let category = value
.get("category")
.and_then(|v| v.as_str())
.unwrap_or("unknown")
.to_string();
let memory_type_str = value
.get("memory_type")
.and_then(|v| v.as_str())
.unwrap_or("knowledge");
let memory_type = crate::types::MemoryType::parse(memory_type_str);
let confidence = value
.get("confidence")
.and_then(|v| v.as_f64())
.unwrap_or(0.7) as f32;
let keywords = crate::json_utils::extract_string_array(value, "keywords");
Some(
ExtractedMemory::new(memory_type, category, content, session_id.clone())
.with_confidence(confidence)
.with_keywords(keywords),
)
}
/// Parse a single experience item
fn parse_experience_item(value: &serde_json::Value) -> Option<crate::types::ExperienceCandidate> {
use crate::types::Outcome;
let pain_pattern = value.get("pain_pattern")?.as_str()?.to_string();
let context = value
.get("context")
.and_then(|v| v.as_str())
.unwrap_or("")
.to_string();
let solution_steps = crate::json_utils::extract_string_array(value, "solution_steps");
let outcome_str = value
.get("outcome")
.and_then(|v| v.as_str())
.unwrap_or("partial");
let outcome = match outcome_str {
"success" => Outcome::Success,
"failed" => Outcome::Failed,
_ => Outcome::Partial,
};
let confidence = value
.get("confidence")
.and_then(|v| v.as_f64())
.unwrap_or(0.6) as f32;
let tools_used = crate::json_utils::extract_string_array(value, "tools_used");
let industry_context = value
.get("industry_context")
.and_then(|v| v.as_str())
.map(String::from);
Some(crate::types::ExperienceCandidate {
pain_pattern,
context,
solution_steps,
outcome,
confidence,
tools_used,
industry_context,
})
}
/// Parse profile_signals
fn parse_profile_signals(obj: &serde_json::Value) -> crate::types::ProfileSignals {
let signals = obj.get("profile_signals");
crate::types::ProfileSignals {
industry: signals
.and_then(|s| s.get("industry"))
.and_then(|v| v.as_str())
.map(String::from),
recent_topic: signals
.and_then(|s| s.get("recent_topic"))
.and_then(|v| v.as_str())
.map(String::from),
pain_point: signals
.and_then(|s| s.get("pain_point"))
.and_then(|v| v.as_str())
.map(String::from),
preferred_tool: signals
.and_then(|s| s.get("preferred_tool"))
.and_then(|v| v.as_str())
.map(String::from),
communication_style: signals
.and_then(|s| s.get("communication_style"))
.and_then(|v| v.as_str())
.map(String::from),
}
}
/// Infer structured experiences from existing memories (fallback path)
fn infer_experiences_from_memories(
memories: &[ExtractedMemory],
) -> Vec<crate::types::ExperienceCandidate> {
memories
.iter()
.filter(|m| m.memory_type == crate::types::MemoryType::Experience)
.filter_map(|m| {
// experience-type memory → ExperienceCandidate
let content = &m.content;
if content.len() < 10 {
return None;
}
Some(crate::types::ExperienceCandidate {
pain_pattern: m.category.clone(),
context: content.clone(),
solution_steps: Vec::new(),
outcome: crate::types::Outcome::Partial,
confidence: m.confidence * 0.7, // discount the confidence of inferred results
tools_used: m.keywords.clone(),
industry_context: None,
})
})
.collect()
}
/// Infer profile signals from existing memories (fallback path)
fn infer_profile_signals_from_memories(
memories: &[ExtractedMemory],
) -> crate::types::ProfileSignals {
use crate::types::ProfileSignals;
let mut signals = ProfileSignals::default();
for m in memories {
match m.memory_type {
crate::types::MemoryType::Preference => {
if m.category.contains("style") || m.category.contains("风格") {
if signals.communication_style.is_none() {
signals.communication_style = Some(m.content.clone());
}
}
}
crate::types::MemoryType::Knowledge => {
if signals.recent_topic.is_none() && !m.keywords.is_empty() {
signals.recent_topic = Some(m.keywords.first().cloned().unwrap_or_default());
}
}
crate::types::MemoryType::Experience => {
for kw in &m.keywords {
if signals.preferred_tool.is_none()
&& m.content.contains(kw.as_str())
{
signals.preferred_tool = Some(kw.clone());
break;
}
}
}
_ => {}
}
}
signals
}
/// Default extraction prompts for LLM
@@ -243,6 +561,55 @@ pub mod prompts {
}
}
/// Combined extraction prompt — a single LLM call extracts memories, structured experiences, and profile signals
pub const COMBINED_EXTRACTION_PROMPT: &str = r#"
分析以下对话,一次性提取三类信息。严格按 JSON 格式返回。
## 输出格式
```json
{
"memories": [
{
"memory_type": "preference|knowledge|experience",
"category": "分类标签",
"content": "记忆内容",
"confidence": 0.0-1.0,
"keywords": ["关键词"]
}
],
"experiences": [
{
"pain_pattern": "痛点模式简述",
"context": "问题发生的上下文",
"solution_steps": ["步骤1", "步骤2"],
"outcome": "success|partial|failed",
"confidence": 0.0-1.0,
"tools_used": ["使用的工具/技能"],
"industry_context": "行业标识(可选)"
}
],
"profile_signals": {
"industry": "用户所在行业(可选)",
"recent_topic": "最近讨论的主要话题(可选)",
"pain_point": "用户当前痛点(可选)",
"preferred_tool": "用户偏好的工具/技能(可选)",
"communication_style": "沟通风格: concise|detailed|formal|casual(可选)"
}
}
```
## 提取规则
1. **memories**: 提取用户偏好(沟通风格/格式/语言)、知识(事实/领域知识/经验教训)、使用经验(技能/工具使用模式和结果)
2. **experiences**: 仅提取明确的"问题→解决"模式,要求有清晰的痛点和步骤,confidence >= 0.6
3. **profile_signals**: 从对话中推断用户画像信息,只在有明确信号时填写,留空则不填
4. 每个字段都要有实际内容,不确定的宁可省略
5. 只返回 JSON,不要附加其他文本
对话内容:
"#;
const PREFERENCE_EXTRACTION_PROMPT: &str = r#"
分析以下对话,提取用户的偏好设置。关注:
- 沟通风格偏好(简洁/详细、正式/随意)
@@ -362,11 +729,103 @@ mod tests {
assert!(!result.is_empty());
}
#[tokio::test]
async fn test_extract_combined_all_default_impl() {
let driver = MockLlmDriver;
let messages = vec![Message::user("Hello")];
let result = driver.extract_combined_all(&messages).await.unwrap();
assert_eq!(result.memories.len(), 3); // 3 types
}
#[test]
fn test_prompts_available() {
assert!(!prompts::get_extraction_prompt(MemoryType::Preference).is_empty());
assert!(!prompts::get_extraction_prompt(MemoryType::Knowledge).is_empty());
assert!(!prompts::get_extraction_prompt(MemoryType::Experience).is_empty());
assert!(!prompts::get_extraction_prompt(MemoryType::Session).is_empty());
assert!(!prompts::COMBINED_EXTRACTION_PROMPT.is_empty());
}
#[test]
fn test_parse_combined_response_full() {
let raw = r#"```json
{
"memories": [
{
"memory_type": "preference",
"category": "communication-style",
"content": "用户偏好简洁回复",
"confidence": 0.9,
"keywords": ["简洁", "风格"]
},
{
"memory_type": "knowledge",
"category": "user-facts",
"content": "用户是医院行政人员",
"confidence": 0.85,
"keywords": ["医院", "行政"]
}
],
"experiences": [
{
"pain_pattern": "报表生成耗时",
"context": "月度报表需要手动汇总多个Excel",
"solution_steps": ["使用researcher工具自动抓取", "格式化输出为Excel"],
"outcome": "success",
"confidence": 0.85,
"tools_used": ["researcher"],
"industry_context": "healthcare"
}
],
"profile_signals": {
"industry": "healthcare",
"recent_topic": "报表自动化",
"pain_point": "手动汇总Excel太慢",
"preferred_tool": "researcher",
"communication_style": "concise"
}
}
```"#;
let result = super::parse_combined_response(raw, SessionId::new()).unwrap();
assert_eq!(result.memories.len(), 2);
assert_eq!(result.experiences.len(), 1);
assert_eq!(result.experiences[0].pain_pattern, "报表生成耗时");
assert_eq!(result.experiences[0].outcome, crate::types::Outcome::Success);
assert_eq!(result.profile_signals.industry.as_deref(), Some("healthcare"));
assert_eq!(result.profile_signals.pain_point.as_deref(), Some("手动汇总Excel太慢"));
assert!(result.profile_signals.has_any_signal());
}
#[test]
fn test_parse_combined_response_minimal() {
let raw = r#"{"memories": [], "experiences": [], "profile_signals": {}}"#;
let result = super::parse_combined_response(raw, SessionId::new()).unwrap();
assert!(result.memories.is_empty());
assert!(result.experiences.is_empty());
assert!(!result.profile_signals.has_any_signal());
}
#[test]
fn test_parse_combined_response_invalid() {
let raw = "not json at all";
let result = super::parse_combined_response(raw, SessionId::new());
assert!(result.is_err());
}
#[tokio::test]
async fn test_extract_combined_fallback() {
// MockLlmDriver doesn't implement extract_with_prompt, so it falls back
let driver = Arc::new(MockLlmDriver);
let extractor = MemoryExtractor::new(driver);
let messages = vec![Message::user("Hello"), Message::assistant("Hi there!")];
let result = extractor
.extract_combined(&messages, SessionId::new())
.await
.unwrap();
// Fallback: extract() produces 3 memories, infer produces experiences from them
assert!(!result.memories.is_empty());
}
}
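Before `parse_combined_response` can hand the LLM reply to serde, a fenced reply (as in `test_parse_combined_response_full`) has to be reduced to bare JSON, which is what `json_utils::extract_json_block` is used for. A minimal standalone sketch of that fence-stripping step, assuming behavior along these lines (the real helper may differ):

```rust
// Hypothetical sketch of the fence-stripping step: an LLM reply wrapped in
// ```json ... ``` is reduced to the bare JSON body; unfenced input passes
// through unchanged. Illustrative only, not the crate's implementation.
fn extract_json_block(raw: &str) -> &str {
    let s = raw.trim();
    // Strip a leading ```json (or bare ```) fence and the trailing ``` fence.
    if let Some(rest) = s.strip_prefix("```") {
        let rest = rest.strip_prefix("json").unwrap_or(rest);
        let rest = rest.trim_start();
        if let Some(body) = rest.strip_suffix("```") {
            return body.trim();
        }
    }
    s
}

fn main() {
    let fenced = "```json\n{\"memories\": []}\n```";
    assert_eq!(extract_json_block(fenced), "{\"memories\": []}");
    // Unfenced input is returned as-is.
    assert_eq!(extract_json_block("{\"a\":1}"), "{\"a\":1}");
    println!("strip ok");
}
```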

View File

@@ -0,0 +1,448 @@
//! Feedback signal collection and trust management (Phase 5 feedback loop)
//! Collects explicit/implicit user feedback on evolution artifacts (skills/pipelines)
//! and manages trust decay and the optimization loop.
//! Trust records are persisted via VikingAdapter
use std::collections::HashMap;
use std::sync::Arc;
use chrono::{DateTime, Utc};
use serde::{Deserialize, Serialize};
use crate::types::MemoryType;
use crate::viking_adapter::VikingAdapter;
/// Feedback signal type
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
pub enum FeedbackSignal {
/// Opinion expressed directly by the user
Explicit,
/// Inferred from usage behavior
ImplicitUsage,
/// Usage frequency
UsageCount,
/// Task completion rate
CompletionRate,
}
/// Sentiment polarity
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
pub enum Sentiment {
Positive,
Negative,
Neutral,
}
/// Evolution artifact type
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
pub enum EvolutionArtifact {
Skill,
Pipeline,
}
/// A single feedback record
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct FeedbackEntry {
pub artifact_id: String,
pub artifact_type: EvolutionArtifact,
pub signal: FeedbackSignal,
pub sentiment: Sentiment,
pub details: Option<String>,
pub timestamp: DateTime<Utc>,
}
/// Trust record
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct TrustRecord {
pub artifact_id: String,
pub artifact_type: EvolutionArtifact,
pub trust_score: f32,
pub total_feedback: u32,
pub positive_count: u32,
pub negative_count: u32,
pub last_updated: DateTime<Utc>,
}
/// Feedback collector.
/// Manages feedback entries and trust scores;
/// optionally persists trust records via a VikingAdapter.
pub struct FeedbackCollector {
trust_records: HashMap<String, TrustRecord>,
viking: Option<Arc<VikingAdapter>>,
/// Whether trust records have been loaded from persistent storage
loaded: bool,
}
impl FeedbackCollector {
pub fn new() -> Self {
Self {
trust_records: HashMap::new(),
viking: None,
loaded: false,
}
}
/// Create a FeedbackCollector backed by a VikingAdapter
pub fn with_viking(viking: Arc<VikingAdapter>) -> Self {
Self {
trust_records: HashMap::new(),
viking: Some(viking),
loaded: false,
}
}
/// Load persisted trust records from the VikingAdapter
pub async fn load(&mut self) -> Result<usize, String> {
let viking = match &self.viking {
Some(v) => v,
None => return Ok(0),
};
// MemoryEntry::new("feedback", Session, artifact_id) produces
// URI: agent://feedback/sessions/{artifact_id}
let entries = viking
.find_by_prefix("agent://feedback/sessions/")
.await
.map_err(|e| format!("Failed to load trust records: {}", e))?;
let mut count = 0;
for entry in entries {
match serde_json::from_str::<TrustRecord>(&entry.content) {
Ok(record) => {
// Merge without overwriting: keep the newer in-memory record
self.trust_records
.entry(record.artifact_id.clone())
.or_insert(record);
count += 1;
}
Err(e) => {
tracing::warn!(
"[FeedbackCollector] Failed to deserialize trust record at {}: {}",
entry.uri,
e
);
}
}
}
tracing::debug!(
"[FeedbackCollector] Loaded {} trust records from storage",
count
);
Ok(count)
}
/// Persist trust records to the VikingAdapter.
/// On first call, existing records are loaded automatically to avoid overwriting them.
pub async fn save(&mut self) -> Result<usize, String> {
// Auto-load existing records before the first save to avoid losing history
if !self.loaded {
match self.load().await {
Ok(_) => {
self.loaded = true;
}
Err(e) => {
// On load failure keep loaded=false; the next save will retry
tracing::warn!(
"[FeedbackCollector] Auto-load before save failed, will retry next save: {}",
e
);
}
}
}
let viking = match &self.viking {
Some(v) => v,
None => return Ok(0),
};
let mut saved = 0;
for record in self.trust_records.values() {
let content = match serde_json::to_string(record) {
Ok(c) => c,
Err(e) => {
tracing::warn!(
"[FeedbackCollector] Failed to serialize trust record {}: {}",
record.artifact_id,
e
);
continue;
}
};
let entry = crate::types::MemoryEntry::new(
"feedback",
MemoryType::Session,
&record.artifact_id,
content,
)
.with_importance((record.trust_score * 10.0) as u8);
match viking.store(&entry).await {
Ok(_) => saved += 1,
Err(e) => {
tracing::warn!(
"[FeedbackCollector] Failed to save trust record {}: {}",
record.artifact_id,
e
);
}
}
}
tracing::debug!(
"[FeedbackCollector] Saved {} trust records to storage",
saved
);
Ok(saved)
}
/// Submit a single feedback entry
pub fn submit_feedback(&mut self, entry: FeedbackEntry) -> TrustUpdate {
let record = self
.trust_records
.entry(entry.artifact_id.clone())
.or_insert_with(|| TrustRecord {
artifact_id: entry.artifact_id.clone(),
artifact_type: entry.artifact_type.clone(),
trust_score: 0.5,
total_feedback: 0,
positive_count: 0,
negative_count: 0,
last_updated: Utc::now(),
});
// Update counters
record.total_feedback += 1;
match entry.sentiment {
Sentiment::Positive => record.positive_count += 1,
Sentiment::Negative => record.negative_count += 1,
Sentiment::Neutral => {}
}
// Recalculate the trust score
let old_score = record.trust_score;
record.trust_score = Self::calculate_trust_internal(
record.positive_count,
record.negative_count,
record.total_feedback,
record.last_updated,
);
record.last_updated = Utc::now();
let new_score = record.trust_score;
let total = record.total_feedback;
let action = Self::recommend_action_internal(new_score, total);
TrustUpdate {
artifact_id: entry.artifact_id.clone(),
old_score,
new_score,
action,
}
}
/// Get the trust record for an artifact
pub fn get_trust(&self, artifact_id: &str) -> Option<&TrustRecord> {
self.trust_records.get(artifact_id)
}
/// Get all artifacts needing optimization (trust score < 0.4)
pub fn get_artifacts_needing_optimization(&self) -> Vec<&TrustRecord> {
self.trust_records
.values()
.filter(|r| r.trust_score < 0.4 && r.total_feedback >= 2)
.collect()
}
/// Get all artifacts that should be archived (trust score < 0.2 and feedback >= 5)
pub fn get_artifacts_to_archive(&self) -> Vec<&TrustRecord> {
self.trust_records
.values()
.filter(|r| r.trust_score < 0.2 && r.total_feedback >= 5)
.collect()
}
/// Get all high-trust artifacts (trust score >= 0.8)
pub fn get_recommended_artifacts(&self) -> Vec<&TrustRecord> {
self.trust_records
.values()
.filter(|r| r.trust_score >= 0.8)
.collect()
}
fn calculate_trust_internal(
positive: u32,
negative: u32,
total: u32,
last_updated: DateTime<Utc>,
) -> f32 {
if total == 0 {
return 0.5;
}
let positive_ratio = positive as f32 / total as f32;
let negative_penalty = negative as f32 * 0.1;
let days_since = (Utc::now() - last_updated).num_days().max(0) as f32;
let time_decay = 1.0 - (days_since * 0.005).min(0.5);
(positive_ratio * time_decay - negative_penalty).clamp(0.0, 1.0)
}
fn recommend_action_internal(trust_score: f32, total_feedback: u32) -> RecommendedAction {
if trust_score >= 0.8 {
RecommendedAction::Promote
} else if trust_score < 0.2 && total_feedback >= 5 {
RecommendedAction::Archive
} else if trust_score < 0.4 && total_feedback >= 2 {
RecommendedAction::Optimize
} else {
RecommendedAction::Monitor
}
}
}
impl Default for FeedbackCollector {
fn default() -> Self {
Self::new()
}
}
/// Result of a trust update
#[derive(Debug, Clone)]
pub struct TrustUpdate {
pub artifact_id: String,
pub old_score: f32,
pub new_score: f32,
pub action: RecommendedAction,
}
/// Recommended action
#[derive(Debug, Clone, PartialEq)]
pub enum RecommendedAction {
/// Keep observing
Monitor,
/// Needs optimization
Optimize,
/// Recommend archiving (demote to a memory)
Archive,
/// Recommend promoting to a featured skill
Promote,
}
#[cfg(test)]
mod tests {
use super::*;
fn make_feedback(artifact_id: &str, sentiment: Sentiment) -> FeedbackEntry {
FeedbackEntry {
artifact_id: artifact_id.to_string(),
artifact_type: EvolutionArtifact::Skill,
signal: FeedbackSignal::Explicit,
sentiment,
details: None,
timestamp: Utc::now(),
}
}
#[test]
fn test_initial_trust() {
let collector = FeedbackCollector::new();
assert!(collector.get_trust("skill-1").is_none());
}
#[test]
fn test_positive_feedback_increases_trust() {
let mut collector = FeedbackCollector::new();
collector.submit_feedback(make_feedback("skill-1", Sentiment::Positive));
let record = collector.get_trust("skill-1").unwrap();
assert!(record.trust_score > 0.5);
assert_eq!(record.positive_count, 1);
}
#[test]
fn test_negative_feedback_decreases_trust() {
let mut collector = FeedbackCollector::new();
collector.submit_feedback(make_feedback("skill-1", Sentiment::Negative));
let record = collector.get_trust("skill-1").unwrap();
assert!(record.trust_score < 0.5);
}
#[test]
fn test_mixed_feedback() {
let mut collector = FeedbackCollector::new();
collector.submit_feedback(make_feedback("skill-1", Sentiment::Positive));
collector.submit_feedback(make_feedback("skill-1", Sentiment::Positive));
collector.submit_feedback(make_feedback("skill-1", Sentiment::Negative));
let record = collector.get_trust("skill-1").unwrap();
assert_eq!(record.total_feedback, 3);
assert!(record.trust_score > 0.3); // 2/3 positive
}
#[test]
fn test_recommend_optimize() {
let mut collector = FeedbackCollector::new();
collector.submit_feedback(make_feedback("skill-1", Sentiment::Negative));
let update = collector.submit_feedback(make_feedback("skill-1", Sentiment::Negative));
assert_eq!(update.action, RecommendedAction::Optimize);
}
#[test]
fn test_needs_optimization_filter() {
let mut collector = FeedbackCollector::new();
collector.submit_feedback(make_feedback("bad-skill", Sentiment::Negative));
collector.submit_feedback(make_feedback("bad-skill", Sentiment::Negative));
collector.submit_feedback(make_feedback("good-skill", Sentiment::Positive));
let needs = collector.get_artifacts_needing_optimization();
assert_eq!(needs.len(), 1);
assert_eq!(needs[0].artifact_id, "bad-skill");
}
#[test]
fn test_promote_recommendation() {
let mut collector = FeedbackCollector::new();
for _ in 0..5 {
collector.submit_feedback(make_feedback("great-skill", Sentiment::Positive));
}
let recommended = collector.get_recommended_artifacts();
assert_eq!(recommended.len(), 1);
}
#[tokio::test]
async fn test_save_and_load_roundtrip() {
let viking = Arc::new(crate::VikingAdapter::in_memory());
// Write phase
let mut collector = FeedbackCollector::with_viking(viking.clone());
collector.submit_feedback(make_feedback("skill-a", Sentiment::Positive));
collector.submit_feedback(make_feedback("skill-a", Sentiment::Positive));
collector.submit_feedback(make_feedback("skill-b", Sentiment::Negative));
let saved = collector.save().await.unwrap();
assert_eq!(saved, 2); // 2 artifacts
// Read phase: a fresh collector loads from storage
let mut collector2 = FeedbackCollector::with_viking(viking);
let loaded = collector2.load().await.unwrap();
assert_eq!(loaded, 2);
let record_a = collector2.get_trust("skill-a").unwrap();
assert_eq!(record_a.positive_count, 2);
assert_eq!(record_a.total_feedback, 2);
let record_b = collector2.get_trust("skill-b").unwrap();
assert_eq!(record_b.negative_count, 1);
}
#[tokio::test]
async fn test_load_without_viking_returns_zero() {
let mut collector = FeedbackCollector::new();
let loaded = collector.load().await.unwrap();
assert_eq!(loaded, 0);
}
#[tokio::test]
async fn test_save_without_viking_returns_zero() {
let mut collector = FeedbackCollector::new();
let saved = collector.save().await.unwrap();
assert_eq!(saved, 0);
}
}


@@ -0,0 +1,148 @@
//! Shared JSON utility functions
//! Extracts JSON blocks from text returned by an LLM
/// Extract a JSON block from LLM output text.
/// Supports three formats: a ```json...``` fence, a plain ```...``` fence, and a bare {...}.
/// Uses a brace-balancing scan to find the first complete JSON block and avoid false matches.
pub fn extract_json_block(text: &str) -> &str {
// Try to match a ```json ... ``` fence
if let Some(start) = text.find("```json") {
let json_start = start + 7;
if let Some(end) = text[json_start..].find("```") {
return text[json_start..json_start + end].trim();
}
}
// Try to match a plain ``` ... ``` fence
if let Some(start) = text.find("```") {
let json_start = start + 3;
if let Some(end) = text[json_start..].find("```") {
return text[json_start..json_start + end].trim();
}
}
// Fall back to a brace-balancing scan for the first complete {...} block
if let Some(slice) = find_balanced_json(text) {
return slice;
}
text.trim()
}
/// Find the first complete {...} JSON block using balanced brace counting.
/// Correctly handles braces inside string literals.
fn find_balanced_json(text: &str) -> Option<&str> {
let start = text.find('{')?;
let mut depth = 0i32;
let mut in_string = false;
let mut escape_next = false;
for (i, c) in text[start..].char_indices() {
if escape_next {
escape_next = false;
continue;
}
match c {
'\\' if in_string => escape_next = true,
'"' => in_string = !in_string,
'{' if !in_string => {
depth += 1;
}
'}' if !in_string => {
depth -= 1;
if depth == 0 {
return Some(&text[start..=start + i]);
}
}
_ => {}
}
}
None
}
/// Extract an array of strings from a serde_json::Value.
/// Used to parse fields such as triggers/tools in LLM-returned JSON.
pub fn extract_string_array(raw: &serde_json::Value, key: &str) -> Vec<String> {
raw.get(key)
.and_then(|v| v.as_array())
.map(|a| {
a.iter()
.filter_map(|v| v.as_str().map(String::from))
.collect()
})
.unwrap_or_default()
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_json_block_with_markdown() {
let text = "Here is the result:\n```json\n{\"key\": \"value\"}\n```\nDone.";
assert_eq!(extract_json_block(text), "{\"key\": \"value\"}");
}
#[test]
fn test_json_block_bare() {
let text = "{\"key\": \"value\"}";
assert_eq!(extract_json_block(text), "{\"key\": \"value\"}");
}
#[test]
fn test_json_block_plain_fences() {
let text = "Result:\n```\n{\"a\": 1}\n```";
assert_eq!(extract_json_block(text), "{\"a\": 1}");
}
#[test]
fn test_json_block_nested_braces() {
let text = r#"{"outer": {"inner": "val"}}"#;
assert_eq!(extract_json_block(text), r#"{"outer": {"inner": "val"}}"#);
}
#[test]
fn test_json_block_no_json() {
let text = "no json here";
assert_eq!(extract_json_block(text), "no json here");
}
#[test]
fn test_balanced_json_skips_outer_text() {
// Matching the first { to the last } would swallow the surrounding text; the balanced scan takes only the first complete block
let text = "prefix {\"a\": 1} suffix {\"b\": 2}";
assert_eq!(extract_json_block(text), "{\"a\": 1}");
}
#[test]
fn test_balanced_json_handles_braces_in_strings() {
let text = r#"{"body": "function() { return x; }", "name": "test"}"#;
assert_eq!(
extract_json_block(text),
r#"{"body": "function() { return x; }", "name": "test"}"#
);
}
#[test]
fn test_balanced_json_handles_escaped_quotes() {
let text = r#"{"msg": "He said \"hello {world}\""}"#;
assert_eq!(
extract_json_block(text),
r#"{"msg": "He said \"hello {world}\""}"#
);
}
#[test]
fn test_extract_string_array() {
let raw: serde_json::Value = serde_json::from_str(
r#"{"triggers": ["报表", "日报"], "name": "test"}"#,
)
.unwrap();
let arr = extract_string_array(&raw, "triggers");
assert_eq!(arr, vec!["报表", "日报"]);
}
#[test]
fn test_extract_string_array_missing_key() {
let raw: serde_json::Value = serde_json::from_str(r#"{"name": "test"}"#).unwrap();
let arr = extract_string_array(&raw, "triggers");
assert!(arr.is_empty());
}
}
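The brace-balancing scan in `find_balanced_json` can be restated as a self-contained sketch; the helper name `extract_braced` and the sample inputs below are illustrative only, not part of the crate:

```rust
/// Return the first balanced {...} block, skipping braces inside string literals.
fn extract_braced(text: &str) -> Option<&str> {
    let start = text.find('{')?;
    let mut depth = 0i32;
    let mut in_string = false;
    let mut escaped = false;
    for (i, c) in text[start..].char_indices() {
        if escaped {
            escaped = false;
            continue;
        }
        match c {
            '\\' if in_string => escaped = true,
            '"' => in_string = !in_string,
            '{' if !in_string => depth += 1,
            '}' if !in_string => {
                depth -= 1;
                if depth == 0 {
                    return Some(&text[start..=start + i]);
                }
            }
            _ => {}
        }
    }
    None // unbalanced input: no complete block found
}

fn main() {
    // The scan stops at the first complete block and ignores later ones.
    assert_eq!(extract_braced("x {\"a\": 1} y {\"b\": 2}"), Some("{\"a\": 1}"));
    // Braces inside string values do not disturb the depth count.
    assert_eq!(
        extract_braced(r#"{"body": "fn() { }"}"#),
        Some(r#"{"body": "fn() { }"}"#)
    );
}
```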


@@ -5,10 +5,13 @@
//!
//! # Architecture
//!
//! The growth system consists of four main components:
//! The growth system consists of several subsystems:
//!
//! ## Memory Pipeline (L0-L2)
//!
//! 1. **MemoryExtractor** (`extractor`) - Analyzes conversations and extracts
//! preferences, knowledge, and experience using LLM.
//! preferences, knowledge, and experience using LLM. Supports combined extraction
//! (single LLM call for memories + experiences + profile signals).
//!
//! 2. **MemoryRetriever** (`retriever`) - Performs semantic search over
//! stored memories to find contextually relevant information.
@@ -19,6 +22,28 @@
//! 4. **GrowthTracker** (`tracker`) - Tracks growth metrics and evolution
//! over time.
//!
//! ## Evolution Engine (L1-L3)
//!
//! 5. **ExperienceStore** (`experience_store`) - FTS5-backed structured experience storage.
//!
//! 6. **PatternAggregator** (`pattern_aggregator`) - Collects high-frequency patterns for L2.
//!
//! 7. **SkillGenerator** (`skill_generator`) - LLM-driven SKILL.md content generation.
//!
//! 8. **QualityGate** (`quality_gate`) - Validates candidate skills (confidence, conflicts).
//!
//! 9. **EvolutionEngine** (`evolution_engine`) - Orchestrates L1/L2/L3 evolution phases.
//!
//! 10. **WorkflowComposer** (`workflow_composer`) - Extracts tool chain patterns for Pipeline YAML.
//!
//! 11. **FeedbackCollector** (`feedback_collector`) - Trust score management with decay.
//!
//! ## Support Modules
//!
//! 12. **VikingAdapter** (`viking_adapter`) - Storage abstraction (in-memory + SQLite backends).
//! 13. **Summarizer** (`summarizer`) - L0/L1 summary generation.
//! 14. **JsonUtils** (`json_utils`) - Shared JSON parsing utilities.
//!
//! # Storage
//!
//! All memories are stored in OpenViking with a URI structure:
@@ -65,6 +90,15 @@ pub mod storage;
pub mod retrieval;
pub mod summarizer;
pub mod experience_store;
pub mod json_utils;
pub mod experience_extractor;
pub mod profile_updater;
pub mod pattern_aggregator;
pub mod skill_generator;
pub mod quality_gate;
pub mod evolution_engine;
pub mod workflow_composer;
pub mod feedback_collector;
// Re-export main types for convenience
pub use types::{
@@ -78,6 +112,14 @@ pub use types::{
RetrievalResult,
UriBuilder,
effective_importance,
ArtifactType,
CombinedExtraction,
EvolutionEvent,
EvolutionEventType,
EvolutionStatus,
ExperienceCandidate,
Outcome,
ProfileSignals,
};
pub use extractor::{LlmDriverForExtraction, MemoryExtractor};
@@ -89,6 +131,18 @@ pub use storage::SqliteStorage;
pub use experience_store::{Experience, ExperienceStore};
pub use retrieval::{EmbeddingClient, MemoryCache, QueryAnalyzer, SemanticScorer};
pub use summarizer::SummaryLlmDriver;
pub use experience_extractor::ExperienceExtractor;
pub use json_utils::{extract_json_block, extract_string_array};
pub use profile_updater::{ProfileFieldUpdate, ProfileUpdateKind, UserProfileUpdater};
pub use pattern_aggregator::{AggregatedPattern, PatternAggregator};
pub use skill_generator::{SkillCandidate, SkillGenerator};
pub use quality_gate::{QualityGate, QualityReport};
pub use evolution_engine::{EvolutionConfig, EvolutionEngine};
pub use workflow_composer::{PipelineCandidate, ToolChainPattern, WorkflowComposer};
pub use feedback_collector::{
EvolutionArtifact, FeedbackCollector, FeedbackEntry, FeedbackSignal,
RecommendedAction, Sentiment, TrustRecord, TrustUpdate,
};
/// Growth system configuration
#[derive(Debug, Clone)]


@@ -0,0 +1,245 @@
//! Experience pattern aggregator
//! Collects all Experiences sharing the same pain_pattern and finds their common steps.
//! Used to decide whether to trigger L2 skill evolution.
use std::collections::HashMap;
use crate::experience_store::{Experience, ExperienceStore};
use zclaw_types::Result;
/// An aggregated experience pattern
#[derive(Debug, Clone)]
pub struct AggregatedPattern {
pub pain_pattern: String,
pub experiences: Vec<Experience>,
pub common_steps: Vec<String>,
pub total_reuse: u32,
pub tools_used: Vec<String>,
pub industry_context: Option<String>,
}
/// Experience pattern aggregator.
/// Collects frequently reused patterns from the ExperienceStore as input for L2 skill generation.
pub struct PatternAggregator {
store: ExperienceStore,
}
impl PatternAggregator {
pub fn new(store: ExperienceStore) -> Self {
Self { store }
}
/// Find patterns eligible for consolidation: experiences with reuse_count >= min_reuse
pub async fn find_evolvable_patterns(
&self,
agent_id: &str,
min_reuse: u32,
) -> Result<Vec<AggregatedPattern>> {
let all = self.store.find_by_agent(agent_id).await?;
let mut grouped: HashMap<String, Vec<Experience>> = HashMap::new();
for exp in all {
if exp.reuse_count >= min_reuse {
grouped
.entry(exp.pain_pattern.clone())
.or_default()
.push(exp);
}
}
let mut patterns = Vec::new();
for (pattern, experiences) in grouped {
let total_reuse: u32 = experiences.iter().map(|e| e.reuse_count).sum();
let common_steps = Self::find_common_steps(&experiences);
// Collect distinct tool names from the tool_used field
let tools: Vec<String> = experiences
.iter()
.filter_map(|e| e.tool_used.clone())
.filter(|s| !s.is_empty())
.collect::<std::collections::HashSet<_>>()
.into_iter()
.collect();
let industry = experiences
.iter()
.filter_map(|e| e.industry_context.clone())
.next();
patterns.push(AggregatedPattern {
pain_pattern: pattern,
experiences,
common_steps,
total_reuse,
tools_used: tools,
industry_context: industry,
});
}
// Sort by total reuse, descending
patterns.sort_by(|a, b| b.total_reuse.cmp(&a.total_reuse));
Ok(patterns)
}
/// Find the solution steps shared across multiple experiences
fn find_common_steps(experiences: &[Experience]) -> Vec<String> {
if experiences.is_empty() {
return Vec::new();
}
if experiences.len() == 1 {
return experiences[0].solution_steps.clone();
}
// Count step occurrences across all experiences
let mut step_counts: HashMap<String, u32> = HashMap::new();
for exp in experiences {
for step in &exp.solution_steps {
*step_counts.entry(step.clone()).or_insert(0) += 1;
}
}
let threshold = experiences.len() as f32 * 0.5; // present in 50%+ of experiences
let mut common: Vec<_> = step_counts
.into_iter()
.filter(|(_, count)| (*count as f32) >= threshold)
.map(|(step, _)| step)
.collect();
common.dedup();
common
}
}
#[cfg(test)]
mod tests {
use super::*;
use std::sync::Arc;
#[test]
fn test_find_common_steps_empty() {
let steps = PatternAggregator::find_common_steps(&[]);
assert!(steps.is_empty());
}
#[test]
fn test_find_common_steps_single() {
let exp = Experience::new(
"a",
"packaging",
"ctx",
vec!["step1".into(), "step2".into()],
"ok",
);
let steps = PatternAggregator::find_common_steps(&[exp]);
assert_eq!(steps.len(), 2);
}
#[test]
fn test_find_common_steps_multiple() {
let exp1 = Experience::new(
"a",
"packaging",
"ctx",
vec!["step1".into(), "step2".into(), "step3".into()],
"ok",
);
let exp2 = Experience::new(
"a",
"packaging",
"ctx",
vec!["step1".into(), "step2".into(), "step4".into()],
"ok",
);
// step1 and step2 appear in both (100% >= 50%)
let steps = PatternAggregator::find_common_steps(&[exp1, exp2]);
assert!(steps.contains(&"step1".to_string()));
assert!(steps.contains(&"step2".to_string()));
}
#[tokio::test]
async fn test_find_evolvable_patterns_filters_low_reuse() {
let viking = Arc::new(crate::VikingAdapter::in_memory());
let store = ExperienceStore::new(viking);
// Experience 1: reuse_count = 0 (below threshold)
let mut exp_low = Experience::new(
"agent-1",
"low reuse task",
"ctx",
vec!["step".into()],
"ok",
);
exp_low.reuse_count = 0;
store.store_experience(&exp_low).await.unwrap();
// Experience 2: reuse_count = 5 (above threshold)
let mut exp_high = Experience::new(
"agent-1",
"high reuse task",
"ctx",
vec!["step1".into()],
"ok",
);
exp_high.reuse_count = 5;
store.store_experience(&exp_high).await.unwrap();
let aggregator = PatternAggregator::new(store);
let patterns = aggregator.find_evolvable_patterns("agent-1", 3).await.unwrap();
assert_eq!(patterns.len(), 1);
assert_eq!(patterns[0].pain_pattern, "high reuse task");
assert_eq!(patterns[0].total_reuse, 5);
}
#[tokio::test]
async fn test_find_evolvable_patterns_groups_by_pain() {
let viking = Arc::new(crate::VikingAdapter::in_memory());
let store = ExperienceStore::new(viking);
let mut exp1 = Experience::new(
"agent-1",
"report generation",
"ctx1",
vec!["query db".into(), "format".into()],
"ok",
);
exp1.reuse_count = 3;
store.store_experience(&exp1).await.unwrap();
// The experience URI is deterministic on pain_pattern, so a second experience with the
// same pattern overwrites the first: one experience per pattern, latest wins, by design.
let patterns = aggregator_fixtures::make_patterns_with_same_pain().await;
assert_eq!(patterns.len(), 1);
}
mod aggregator_fixtures {
use super::*;
pub async fn make_patterns_with_same_pain() -> Vec<AggregatedPattern> {
let viking = Arc::new(crate::VikingAdapter::in_memory());
let store = ExperienceStore::new(viking);
let mut exp = Experience::new(
"agent-1",
"report generation",
"ctx1",
vec!["query db".into(), "format".into()],
"ok",
);
exp.reuse_count = 3;
store.store_experience(&exp).await.unwrap();
let aggregator = PatternAggregator::new(store);
aggregator.find_evolvable_patterns("agent-1", 2).await.unwrap()
}
}
#[tokio::test]
async fn test_find_evolvable_patterns_empty() {
let viking = Arc::new(crate::VikingAdapter::in_memory());
let store = ExperienceStore::new(viking);
let aggregator = PatternAggregator::new(store);
let patterns = aggregator.find_evolvable_patterns("unknown-agent", 3).await.unwrap();
assert!(patterns.is_empty());
}
}
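The 50% intersection rule in `find_common_steps` can be sketched in isolation. The helper `common_steps` and its inputs are hypothetical; the crate's version additionally carries reuse counts and tool metadata, and its output order is unsorted:

```rust
use std::collections::HashMap;

/// Keep steps that appear in at least 50% of the experiences' step lists
/// (mirroring the `count as f32 >= len * 0.5` comparison above).
fn common_steps(step_lists: &[Vec<String>]) -> Vec<String> {
    let mut counts: HashMap<&str, u32> = HashMap::new();
    for steps in step_lists {
        for step in steps {
            *counts.entry(step.as_str()).or_insert(0) += 1;
        }
    }
    let threshold = step_lists.len() as f32 * 0.5;
    let mut common: Vec<String> = counts
        .into_iter()
        .filter(|(_, n)| *n as f32 >= threshold)
        .map(|(s, _)| s.to_string())
        .collect();
    common.sort(); // HashMap iteration order is arbitrary; sort for determinism
    common
}

fn main() {
    let lists: Vec<Vec<String>> = vec![
        vec!["query db".into(), "format".into(), "email".into()],
        vec!["query db".into(), "format".into()],
        vec!["query db".into(), "archive".into()],
    ];
    // "query db" is in 3/3 lists and "format" in 2/3 (both >= 50%);
    // "email" and "archive" appear in only 1/3 and are dropped.
    assert_eq!(common_steps(&lists), vec!["format", "query db"]);
}
```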


@@ -0,0 +1,157 @@
//! Incremental user profile updater
//! Extracts the fields to update from a CombinedExtraction's profile_signals.
//! Purely rule-driven; makes no extra LLM calls.
use crate::types::CombinedExtraction;
/// Update kind: overwrite a field vs. append to an array
#[derive(Debug, Clone, PartialEq)]
pub enum ProfileUpdateKind {
/// Overwrite the field value directly: industry, communication_style
SetField,
/// Append to a JSON array field: recent_topic, pain_point, preferred_tool
AppendArray,
}
/// A profile field pending update
#[derive(Debug, Clone, PartialEq)]
pub struct ProfileFieldUpdate {
pub field: String,
pub value: String,
pub kind: ProfileUpdateKind,
}
/// User profile updater.
/// Extracts the list of fields to update from a CombinedExtraction's profile_signals.
/// The caller (zclaw-runtime) is responsible for writing to the UserProfileStore.
pub struct UserProfileUpdater;
impl UserProfileUpdater {
pub fn new() -> Self {
Self
}
/// Collect the profile fields that need updating from an extraction result.
/// Returns a list of (field, value, kind); the caller picks the write strategy based on kind.
pub fn collect_updates(
&self,
extraction: &CombinedExtraction,
) -> Vec<ProfileFieldUpdate> {
let signals = &extraction.profile_signals;
let mut updates = Vec::new();
if let Some(ref industry) = signals.industry {
updates.push(ProfileFieldUpdate {
field: "industry".to_string(),
value: industry.clone(),
kind: ProfileUpdateKind::SetField,
});
}
if let Some(ref style) = signals.communication_style {
updates.push(ProfileFieldUpdate {
field: "communication_style".to_string(),
value: style.clone(),
kind: ProfileUpdateKind::SetField,
});
}
if let Some(ref topic) = signals.recent_topic {
updates.push(ProfileFieldUpdate {
field: "recent_topic".to_string(),
value: topic.clone(),
kind: ProfileUpdateKind::AppendArray,
});
}
if let Some(ref pain) = signals.pain_point {
updates.push(ProfileFieldUpdate {
field: "pain_point".to_string(),
value: pain.clone(),
kind: ProfileUpdateKind::AppendArray,
});
}
if let Some(ref tool) = signals.preferred_tool {
updates.push(ProfileFieldUpdate {
field: "preferred_tool".to_string(),
value: tool.clone(),
kind: ProfileUpdateKind::AppendArray,
});
}
updates
}
}
impl Default for UserProfileUpdater {
fn default() -> Self {
Self::new()
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_collect_updates_industry() {
let mut extraction = CombinedExtraction::default();
extraction.profile_signals.industry = Some("healthcare".to_string());
let updater = UserProfileUpdater::new();
let updates = updater.collect_updates(&extraction);
assert_eq!(updates.len(), 1);
assert_eq!(updates[0].field, "industry");
assert_eq!(updates[0].value, "healthcare");
assert_eq!(updates[0].kind, ProfileUpdateKind::SetField);
}
#[test]
fn test_collect_updates_no_signals() {
let extraction = CombinedExtraction::default();
let updater = UserProfileUpdater::new();
let updates = updater.collect_updates(&extraction);
assert!(updates.is_empty());
}
#[test]
fn test_collect_updates_multiple_signals() {
let mut extraction = CombinedExtraction::default();
extraction.profile_signals.industry = Some("ecommerce".to_string());
extraction.profile_signals.communication_style = Some("concise".to_string());
let updater = UserProfileUpdater::new();
let updates = updater.collect_updates(&extraction);
assert_eq!(updates.len(), 2);
}
#[test]
fn test_collect_updates_all_five_dimensions() {
let mut extraction = CombinedExtraction::default();
extraction.profile_signals.industry = Some("healthcare".to_string());
extraction.profile_signals.communication_style = Some("concise".to_string());
extraction.profile_signals.recent_topic = Some("报表自动化".to_string());
extraction.profile_signals.pain_point = Some("手动汇总太慢".to_string());
extraction.profile_signals.preferred_tool = Some("researcher".to_string());
let updater = UserProfileUpdater::new();
let updates = updater.collect_updates(&extraction);
assert_eq!(updates.len(), 5);
let set_fields: Vec<_> = updates
.iter()
.filter(|u| u.kind == ProfileUpdateKind::SetField)
.map(|u| u.field.as_str())
.collect();
let append_fields: Vec<_> = updates
.iter()
.filter(|u| u.kind == ProfileUpdateKind::AppendArray)
.map(|u| u.field.as_str())
.collect();
assert_eq!(set_fields, vec!["industry", "communication_style"]);
assert_eq!(append_fields, vec!["recent_topic", "pain_point", "preferred_tool"]);
}
}
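On the caller side, `ProfileUpdateKind` tells zclaw-runtime whether to overwrite or append. A minimal sketch of that dispatch, assuming a toy in-memory profile rather than the real UserProfileStore (the `Update`/`Kind`/`apply` names are illustrative):

```rust
use std::collections::HashMap;

enum Kind { SetField, AppendArray }

struct Update { field: String, value: String, kind: Kind }

/// Apply collected updates to a toy profile:
/// SetField overwrites the slot, AppendArray pushes onto it.
fn apply(profile: &mut HashMap<String, Vec<String>>, updates: Vec<Update>) {
    for u in updates {
        let slot = profile.entry(u.field).or_default();
        match u.kind {
            Kind::SetField => { slot.clear(); slot.push(u.value); }
            Kind::AppendArray => slot.push(u.value),
        }
    }
}

fn main() {
    let mut profile = HashMap::new();
    apply(&mut profile, vec![
        Update { field: "industry".into(), value: "retail".into(), kind: Kind::SetField },
        Update { field: "industry".into(), value: "healthcare".into(), kind: Kind::SetField },
        Update { field: "recent_topic".into(), value: "reports".into(), kind: Kind::AppendArray },
        Update { field: "recent_topic".into(), value: "billing".into(), kind: Kind::AppendArray },
    ]);
    assert_eq!(profile["industry"], vec!["healthcare"]); // last SetField wins
    assert_eq!(profile["recent_topic"], vec!["reports", "billing"]); // appends accumulate
}
```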


@@ -0,0 +1,160 @@
//! Quality gate
//! Validates that generated skills/workflows meet quality standards,
//! including confidence threshold, trigger conflict checks, and format validation.
use crate::skill_generator::SkillCandidate;
/// Quality validation report
#[derive(Debug, Clone)]
pub struct QualityReport {
pub passed: bool,
pub issues: Vec<String>,
pub confidence: f32,
}
/// Quality gate validator
pub struct QualityGate {
min_confidence: f32,
existing_triggers: Vec<String>,
}
impl QualityGate {
pub fn new(min_confidence: f32, existing_triggers: Vec<String>) -> Self {
Self {
min_confidence,
existing_triggers,
}
}
/// Validate a skill candidate
pub fn validate_skill(&self, candidate: &SkillCandidate) -> QualityReport {
let mut issues = Vec::new();
// 1. Confidence check
if candidate.confidence < self.min_confidence {
issues.push(format!(
"置信度 {:.2} 低于阈值 {:.2}",
candidate.confidence, self.min_confidence
));
}
// 2. Name must be non-empty
if candidate.name.trim().is_empty() {
issues.push("技能名称不能为空".to_string());
}
// 3. At least one trigger word
if candidate.triggers.is_empty() {
issues.push("至少需要一个触发词".to_string());
}
// 4. Triggers must not conflict with existing skills
let conflicts: Vec<_> = candidate
.triggers
.iter()
.filter(|t| self.existing_triggers.iter().any(|et| et == *t))
.collect();
if !conflicts.is_empty() {
issues.push(format!("触发词冲突: {:?}", conflicts));
}
// 5. SKILL.md body must be non-empty
if candidate.body_markdown.trim().is_empty() {
issues.push("技能正文不能为空".to_string());
}
QualityReport {
passed: issues.is_empty(),
issues,
confidence: candidate.confidence,
}
}
}
#[cfg(test)]
mod tests {
use super::*;
fn make_valid_candidate() -> SkillCandidate {
SkillCandidate {
name: "每日报表".to_string(),
description: "生成每日报表".to_string(),
triggers: vec!["报表".to_string(), "日报".to_string()],
tools: vec!["researcher".to_string()],
body_markdown: "# 每日报表\n步骤1\n步骤2".to_string(),
source_pattern: "报表生成".to_string(),
confidence: 0.85,
version: 1,
}
}
#[test]
fn test_validate_valid_skill() {
let gate = QualityGate::new(0.7, vec!["搜索".to_string()]);
let candidate = make_valid_candidate();
let report = gate.validate_skill(&candidate);
assert!(report.passed);
assert!(report.issues.is_empty());
}
#[test]
fn test_validate_low_confidence() {
let gate = QualityGate::new(0.7, vec![]);
let mut candidate = make_valid_candidate();
candidate.confidence = 0.5;
let report = gate.validate_skill(&candidate);
assert!(!report.passed);
assert!(report.issues.iter().any(|i| i.contains("置信度")));
}
#[test]
fn test_validate_empty_name() {
let gate = QualityGate::new(0.5, vec![]);
let mut candidate = make_valid_candidate();
candidate.name = "".to_string();
let report = gate.validate_skill(&candidate);
assert!(!report.passed);
assert!(report.issues.iter().any(|i| i.contains("名称")));
}
#[test]
fn test_validate_empty_triggers() {
let gate = QualityGate::new(0.5, vec![]);
let mut candidate = make_valid_candidate();
candidate.triggers = vec![];
let report = gate.validate_skill(&candidate);
assert!(!report.passed);
assert!(report.issues.iter().any(|i| i.contains("触发词")));
}
#[test]
fn test_validate_trigger_conflict() {
let gate = QualityGate::new(0.5, vec!["报表".to_string()]);
let candidate = make_valid_candidate();
let report = gate.validate_skill(&candidate);
assert!(!report.passed);
assert!(report.issues.iter().any(|i| i.contains("冲突")));
}
#[test]
fn test_validate_empty_body() {
let gate = QualityGate::new(0.5, vec![]);
let mut candidate = make_valid_candidate();
candidate.body_markdown = "".to_string();
let report = gate.validate_skill(&candidate);
assert!(!report.passed);
assert!(report.issues.iter().any(|i| i.contains("正文")));
}
#[test]
fn test_validate_multiple_issues() {
let gate = QualityGate::new(0.9, vec![]);
let mut candidate = make_valid_candidate();
candidate.confidence = 0.3;
candidate.triggers = vec![];
candidate.body_markdown = "".to_string();
let report = gate.validate_skill(&candidate);
assert!(!report.passed);
assert!(report.issues.len() >= 3);
}
}
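`validate_skill` accumulates every failed check instead of returning on the first failure, so a caller sees all problems in one pass. The pattern generalizes; a minimal sketch with hypothetical check names:

```rust
/// Outcome of running a set of named checks.
struct Report { passed: bool, issues: Vec<String> }

/// Run every check and collect all failures instead of stopping at the first.
fn run_checks(checks: &[(&str, bool)]) -> Report {
    let issues: Vec<String> = checks
        .iter()
        .filter(|(_, ok)| !ok)
        .map(|(msg, _)| msg.to_string())
        .collect();
    Report { passed: issues.is_empty(), issues }
}

fn main() {
    let name = "";
    let triggers: Vec<&str> = Vec::new();
    let report = run_checks(&[
        ("name must be non-empty", !name.trim().is_empty()),
        ("at least one trigger required", !triggers.is_empty()),
        ("confidence above threshold", 0.85_f32 >= 0.7),
    ]);
    assert!(!report.passed);
    assert_eq!(report.issues.len(), 2); // both empty-field failures reported together
}
```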


@@ -19,7 +19,7 @@ struct CacheEntry {
}
/// Cache key for efficient lookups (reserved for future cache optimization)
#[allow(dead_code)]
#[allow(dead_code)] // @reserved: post-release cache optimization lookups
#[derive(Debug, Clone, Hash, Eq, PartialEq)]
struct CacheKey {
agent_id: String,


@@ -0,0 +1,164 @@
//! Skill generator
//! Turns aggregated experience patterns into SKILL.md content via an LLM.
//! Provides prompt construction and JSON response parsing.
use crate::pattern_aggregator::AggregatedPattern;
use zclaw_types::Result;
/// A candidate skill
#[derive(Debug, Clone)]
pub struct SkillCandidate {
pub name: String,
pub description: String,
pub triggers: Vec<String>,
pub tools: Vec<String>,
pub body_markdown: String,
pub source_pattern: String,
pub confidence: f32,
/// Skill version number, used to track later iterations
pub version: u32,
}
/// Prompt for LLM-driven skill generation
const SKILL_GENERATION_PROMPT: &str = r#"
你是一个技能设计专家。根据以下用户反复出现的问题和解决步骤,生成一个可复用的技能定义。
问题模式:{pain_pattern}
解决步骤:{steps}
使用的工具:{tools}
行业背景:{industry}
请生成以下 JSON
```json
{
"name": "技能名称(简短中文)",
"description": "技能描述(一段话)",
"triggers": ["触发词1", "触发词2", "触发词3"],
"tools": ["tool1", "tool2"],
"body_markdown": "技能的 Markdown 正文,包含步骤说明",
"confidence": 0.85
}
```
"#;
/// Skill generator.
/// Handles prompt construction and parsing of the LLM's JSON response.
pub struct SkillGenerator;
impl SkillGenerator {
pub fn new() -> Self {
Self
}
/// Build the LLM prompt from an aggregated pattern
pub fn build_prompt(pattern: &AggregatedPattern) -> String {
SKILL_GENERATION_PROMPT
.replace("{pain_pattern}", &pattern.pain_pattern)
.replace("{steps}", &pattern.common_steps.join("\n"))
.replace("{tools}", &pattern.tools_used.join(", "))
.replace("{industry}", pattern.industry_context.as_deref().unwrap_or("通用"))
}
/// Parse the LLM's JSON response into a SkillCandidate
pub fn parse_response(json_str: &str, pattern: &AggregatedPattern) -> Result<SkillCandidate> {
let json_str = crate::json_utils::extract_json_block(json_str);
let raw: serde_json::Value = serde_json::from_str(json_str).map_err(|e| {
zclaw_types::ZclawError::ConfigError(format!("Invalid skill JSON: {}", e))
})?;
Ok(SkillCandidate {
name: raw["name"]
.as_str()
.unwrap_or("未命名技能")
.to_string(),
description: raw["description"].as_str().unwrap_or("").to_string(),
triggers: crate::json_utils::extract_string_array(&raw, "triggers"),
tools: crate::json_utils::extract_string_array(&raw, "tools"),
body_markdown: raw["body_markdown"].as_str().unwrap_or("").to_string(),
source_pattern: pattern.pain_pattern.clone(),
confidence: raw["confidence"].as_f64().unwrap_or(0.5) as f32,
version: raw["version"].as_u64().unwrap_or(1) as u32,
})
}
}
impl Default for SkillGenerator {
fn default() -> Self {
Self::new()
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::experience_store::Experience;
fn make_pattern() -> AggregatedPattern {
let exp = Experience::new(
"agent-1",
"报表生成",
"researcher",
vec!["查询数据库".into(), "格式化输出".into()],
"success",
);
AggregatedPattern {
pain_pattern: "报表生成".to_string(),
experiences: vec![exp],
common_steps: vec!["查询数据库".into(), "格式化输出".into()],
total_reuse: 5,
tools_used: vec!["researcher".into()],
industry_context: Some("healthcare".into()),
}
}
#[test]
fn test_build_prompt() {
let pattern = make_pattern();
let prompt = SkillGenerator::build_prompt(&pattern);
assert!(prompt.contains("报表生成"));
assert!(prompt.contains("查询数据库"));
assert!(prompt.contains("researcher"));
assert!(prompt.contains("healthcare"));
}
#[test]
fn test_parse_response_valid_json() {
let pattern = make_pattern();
let json = r##"{"name":"每日报表","description":"生成每日报表","triggers":["报表","日报"],"tools":["researcher"],"body_markdown":"# 每日报表\n步骤1","confidence":0.9}"##;
let candidate = SkillGenerator::parse_response(json, &pattern).unwrap();
assert_eq!(candidate.name, "每日报表");
assert_eq!(candidate.triggers.len(), 2);
assert_eq!(candidate.confidence, 0.9);
assert_eq!(candidate.source_pattern, "报表生成");
}
#[test]
fn test_parse_response_json_block() {
let pattern = make_pattern();
let text = r#"```json
{"name":"技能A","description":"desc","triggers":["a"],"tools":[],"body_markdown":"body","confidence":0.8}
```"#;
let candidate = SkillGenerator::parse_response(text, &pattern).unwrap();
assert_eq!(candidate.name, "技能A");
}
#[test]
fn test_parse_response_invalid_json() {
let pattern = make_pattern();
let result = SkillGenerator::parse_response("not json at all", &pattern);
assert!(result.is_err());
}
#[test]
fn test_extract_json_block_with_markdown() {
let text = "Here is the result:\n```json\n{\"key\": \"value\"}\n```\nDone.";
assert_eq!(crate::json_utils::extract_json_block(text), "{\"key\": \"value\"}");
}
#[test]
fn test_extract_json_block_bare() {
let text = "{\"key\": \"value\"}";
assert_eq!(crate::json_utils::extract_json_block(text), "{\"key\": \"value\"}");
}
}
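`build_prompt` relies on plain substring replacement, which is safe here because only the exact `{pain_pattern}`-style placeholders are touched; the braces of the JSON skeleton embedded in the prompt template survive substitution. A sketch of that property (the `fill` helper is illustrative, not the crate's API):

```rust
/// Fill `{name}`-style placeholders; any braces that are not an exact
/// placeholder key (e.g. a JSON skeleton) are left untouched.
fn fill(template: &str, vars: &[(&str, &str)]) -> String {
    let mut out = template.to_string();
    for (key, value) in vars {
        out = out.replace(&format!("{{{}}}", key), value);
    }
    out
}

fn main() {
    let template = "Pattern: {pain_pattern}\nReply with JSON: {\"name\": \"...\"}";
    let filled = fill(template, &[("pain_pattern", "report generation")]);
    assert!(filled.contains("Pattern: report generation"));
    assert!(filled.contains("{\"name\": \"...\"}")); // JSON braces untouched
}
```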


@@ -22,7 +22,7 @@ pub struct SqliteStorage {
/// Semantic scorer for similarity computation
scorer: Arc<RwLock<SemanticScorer>>,
/// Database path (for reference)
#[allow(dead_code)]
#[allow(dead_code)] // @reserved: db path for diagnostics and reconnect
path: PathBuf,
}
@@ -132,13 +132,16 @@ impl SqliteStorage {
.map_err(|e| ZclawError::StorageError(format!("Failed to create memories table: {}", e)))?;
// Create FTS5 virtual table for full-text search
// Use trigram tokenizer for CJK (Chinese/Japanese/Korean) support.
// unicode61 cannot tokenize CJK characters, causing memory search to fail.
// trigram indexes overlapping 3-character slices, works well for all languages.
sqlx::query(
r#"
CREATE VIRTUAL TABLE IF NOT EXISTS memories_fts USING fts5(
uri,
content,
keywords,
tokenize='unicode61'
tokenize='trigram'
)
"#,
)
@@ -159,22 +162,77 @@ impl SqliteStorage {
.map_err(|e| ZclawError::StorageError(format!("Failed to create importance index: {}", e)))?;
// Migration: add overview column (L1 summary)
let _ = sqlx::query("ALTER TABLE memories ADD COLUMN overview TEXT")
// SQLite ALTER TABLE ADD COLUMN fails with "duplicate column name" if already applied
if let Err(e) = sqlx::query("ALTER TABLE memories ADD COLUMN overview TEXT")
.execute(&self.pool)
.await;
.await
{
let msg = e.to_string();
if !msg.contains("duplicate column name") {
tracing::warn!("[Growth] Migration overview failed: {}", msg);
}
}
// Migration: add abstract_summary column (L0 keywords)
let _ = sqlx::query("ALTER TABLE memories ADD COLUMN abstract_summary TEXT")
if let Err(e) = sqlx::query("ALTER TABLE memories ADD COLUMN abstract_summary TEXT")
.execute(&self.pool)
.await;
.await
{
let msg = e.to_string();
if !msg.contains("duplicate column name") {
tracing::warn!("[Growth] Migration abstract_summary failed: {}", msg);
}
}
// P2-24: Migration — content fingerprint for deduplication
let _ = sqlx::query("ALTER TABLE memories ADD COLUMN content_hash TEXT")
if let Err(e) = sqlx::query("ALTER TABLE memories ADD COLUMN content_hash TEXT")
.execute(&self.pool)
.await;
let _ = sqlx::query("CREATE INDEX IF NOT EXISTS idx_content_hash ON memories(content_hash)")
.await
{
let msg = e.to_string();
if !msg.contains("duplicate column name") {
tracing::warn!("[Growth] Migration content_hash failed: {}", msg);
}
}
if let Err(e) = sqlx::query("CREATE INDEX IF NOT EXISTS idx_content_hash ON memories(content_hash)")
.execute(&self.pool)
.await;
.await
{
tracing::warn!("[Growth] Migration idx_content_hash failed: {}", e);
}
// Backfill content_hash for existing entries that have NULL content_hash
{
use std::hash::{Hash, Hasher};
let rows: Vec<(String, String)> = sqlx::query_as(
"SELECT uri, content FROM memories WHERE content_hash IS NULL"
)
.fetch_all(&self.pool)
.await
.unwrap_or_default();
if !rows.is_empty() {
for (uri, content) in &rows {
let normalized = content.trim().to_lowercase();
let mut hasher = std::collections::hash_map::DefaultHasher::new();
normalized.hash(&mut hasher);
let hash = format!("{:016x}", hasher.finish());
if let Err(e) = sqlx::query("UPDATE memories SET content_hash = ? WHERE uri = ?")
.bind(&hash)
.bind(uri)
.execute(&self.pool)
.await
{
tracing::warn!("[sqlite] content_hash update failed for {}: {}", uri, e);
}
}
tracing::info!(
"[SqliteStorage] Backfilled content_hash for {} existing entries",
rows.len()
);
}
}
// Create metadata table
sqlx::query(
@@ -189,6 +247,49 @@ impl SqliteStorage {
.await
.map_err(|e| ZclawError::StorageError(format!("Failed to create metadata table: {}", e)))?;
// Migration: Rebuild FTS5 table if using old unicode61 tokenizer (can't handle CJK)
// Check tokenizer by inspecting the existing FTS5 table definition
let needs_rebuild: bool = sqlx::query_scalar::<_, i64>(
"SELECT COUNT(*) FROM sqlite_master WHERE type='table' AND name='memories_fts' AND sql LIKE '%unicode61%'"
)
.fetch_one(&self.pool)
.await
.unwrap_or(0) > 0;
if needs_rebuild {
tracing::info!("[SqliteStorage] Rebuilding FTS5 table: unicode61 → trigram for CJK support");
// Drop old FTS5 table
if let Err(e) = sqlx::query("DROP TABLE IF EXISTS memories_fts")
.execute(&self.pool)
.await
{
tracing::warn!("[sqlite] FTS5 table drop failed during rebuild: {}", e);
}
// Recreate with trigram tokenizer
sqlx::query(
r#"
CREATE VIRTUAL TABLE IF NOT EXISTS memories_fts USING fts5(
uri,
content,
keywords,
tokenize='trigram'
)
"#,
)
.execute(&self.pool)
.await
.map_err(|e| ZclawError::StorageError(format!("Failed to recreate FTS5 table: {}", e)))?;
// Reindex all existing memories into FTS5
let reindexed = sqlx::query(
"INSERT INTO memories_fts (uri, content, keywords) SELECT uri, content, keywords FROM memories"
)
.execute(&self.pool)
.await
.map(|r| r.rows_affected())
.unwrap_or(0);
tracing::info!("[SqliteStorage] FTS5 rebuild complete, reindexed {} entries", reindexed);
}
tracing::info!("[SqliteStorage] Database schema initialized");
Ok(())
}
@@ -328,14 +429,17 @@ impl SqliteStorage {
.await;
// Also clean up FTS entries for archived memories
        if let Err(e) = sqlx::query(
r#"
DELETE FROM memories_fts
WHERE uri NOT IN (SELECT uri FROM memories)
"#,
)
.execute(&self.pool)
        .await
{
tracing::warn!("[sqlite] FTS cleanup after archive failed: {}", e);
}
let archived = archive_result
.map(|r| r.rows_affected())
@@ -378,20 +482,83 @@ impl SqliteStorage {
/// Strips these and keeps only alphanumeric + CJK tokens with length > 1,
/// then joins them with `OR` for broad matching.
fn sanitize_fts_query(query: &str) -> String {
// trigram tokenizer requires quoted phrases for substring matching
// and needs at least 3 characters per term to produce results.
let lower = query.to_lowercase();
// Check if query contains CJK characters — trigram handles them natively
let has_cjk = lower.chars().any(|c| {
matches!(c, '\u{4E00}'..='\u{9FFF}' | '\u{3400}'..='\u{4DBF}' | '\u{F900}'..='\u{FAFF}')
});
if has_cjk {
// For CJK queries, extract tokens: CJK character sequences and ASCII words.
// Join with OR for broad matching (not exact phrase, which would miss scattered terms).
let mut tokens: Vec<String> = Vec::new();
let mut cjk_buf = String::new();
let mut ascii_buf = String::new();
for ch in lower.chars() {
let is_cjk = matches!(ch, '\u{4E00}'..='\u{9FFF}' | '\u{3400}'..='\u{4DBF}' | '\u{F900}'..='\u{FAFF}');
if is_cjk {
if !ascii_buf.is_empty() {
if ascii_buf.len() >= 2 {
tokens.push(format!("\"{}\"", ascii_buf));
}
ascii_buf.clear();
}
cjk_buf.push(ch);
} else if ch.is_alphanumeric() {
if !cjk_buf.is_empty() {
// Flush CJK buffer — each CJK character is a potential token
// (trigram indexes 3-char sequences, so single CJK chars won't
// match alone, but 2+ char sequences will)
                        if cjk_buf.chars().count() >= 2 {
tokens.push(format!("\"{}\"", cjk_buf));
}
cjk_buf.clear();
}
ascii_buf.push(ch);
} else {
// Separator — flush both buffers
                    if cjk_buf.chars().count() >= 2 {
tokens.push(format!("\"{}\"", cjk_buf));
}
cjk_buf.clear();
if ascii_buf.len() >= 2 {
tokens.push(format!("\"{}\"", ascii_buf));
}
ascii_buf.clear();
}
}
// Flush remaining
            if cjk_buf.chars().count() >= 2 {
tokens.push(format!("\"{}\"", cjk_buf));
}
if ascii_buf.len() >= 2 {
tokens.push(format!("\"{}\"", ascii_buf));
}
if tokens.is_empty() {
return String::new();
}
tokens.join(" OR ")
} else {
// For non-CJK, split into terms and join with OR
let terms: Vec<String> = lower
.split(|c: char| !c.is_alphanumeric())
.filter(|s| !s.is_empty() && s.len() > 1)
.map(|s| s.to_string())
.map(|s| format!("\"{}\"", s))
.collect();
if terms.is_empty() {
return String::new();
}
// Join with OR so any term can match (broad recall, then rerank by similarity)
terms.join(" OR ")
}
}
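As an illustration, the non-CJK branch above can be exercised in isolation. This sketch duplicates the else-branch logic (lowercase, split on non-alphanumerics, drop one-character terms, quote each term for the trigram tokenizer, OR-join for broad recall):

```rust
// Standalone sketch of the non-CJK branch of sanitize_fts_query.
fn sanitize_non_cjk(query: &str) -> String {
    let terms: Vec<String> = query
        .to_lowercase()
        .split(|c: char| !c.is_alphanumeric())
        .filter(|s| s.len() > 1) // also drops empty segments
        .map(|s| format!("\"{}\"", s))
        .collect();
    terms.join(" OR ")
}

fn main() {
    assert_eq!(
        sanitize_non_cjk("Monthly Sales-Report!"),
        "\"monthly\" OR \"sales\" OR \"report\""
    );
    // Single-character queries sanitize to an empty FTS query.
    assert_eq!(sanitize_non_cjk("a"), "");
}
```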
/// Fetch memories by scope with importance-based ordering.
/// Used internally by find() for scope-based queries.

View File

@@ -66,21 +66,30 @@ impl GrowthTracker {
timestamp: Utc::now(),
};
// Store learning event as MemoryEntry so get_timeline can find it via find_by_prefix
let event_uri = format!("agent://{}/events/{}", agent_id, session_id);
let content = serde_json::to_string(&event)?;
let entry = crate::types::MemoryEntry {
uri: event_uri,
memory_type: MemoryType::Session,
content,
keywords: vec![agent_id.to_string(), session_id.to_string()],
importance: 5,
access_count: 0,
created_at: event.timestamp,
last_accessed: event.timestamp,
overview: None,
abstract_summary: None,
};
self.viking.store(&entry).await?;
// Update last learning time via metadata
self.viking
.store_metadata(
&format!("agent://{}", agent_id),
&AgentMetadata {
last_learning_time: Some(Utc::now()),
total_learning_events: None,
},
)
.await?;

View File

@@ -394,6 +394,103 @@ pub struct DecayResult {
pub archived: u64,
}
// === Evolution Engine Types ===
/// Experience extraction result
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ExperienceCandidate {
pub pain_pattern: String,
pub context: String,
pub solution_steps: Vec<String>,
pub outcome: Outcome,
pub confidence: f32,
pub tools_used: Vec<String>,
pub industry_context: Option<String>,
}
/// Outcome status
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
pub enum Outcome {
Success,
Partial,
Failed,
}
/// Combined extraction result (the full output of a single LLM call)
#[derive(Debug, Clone, Default)]
pub struct CombinedExtraction {
pub memories: Vec<ExtractedMemory>,
pub experiences: Vec<ExperienceCandidate>,
pub profile_signals: ProfileSignals,
}
/// Profile update signals (inferred from extraction results; no extra LLM call)
#[derive(Debug, Clone, Default)]
pub struct ProfileSignals {
pub industry: Option<String>,
pub recent_topic: Option<String>,
pub pain_point: Option<String>,
pub preferred_tool: Option<String>,
pub communication_style: Option<String>,
}
impl ProfileSignals {
    /// Whether at least one signal is present
pub fn has_any_signal(&self) -> bool {
self.industry.is_some()
|| self.recent_topic.is_some()
|| self.pain_point.is_some()
|| self.preferred_tool.is_some()
|| self.communication_style.is_some()
}
    /// Number of non-empty signals
pub fn signal_count(&self) -> usize {
let mut count = 0;
if self.industry.is_some() { count += 1; }
if self.recent_topic.is_some() { count += 1; }
if self.pain_point.is_some() { count += 1; }
if self.preferred_tool.is_some() { count += 1; }
if self.communication_style.is_some() { count += 1; }
count
}
}
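For illustration, `signal_count` can equivalently be written by tallying a flag array rather than incrementing a counter field by field. This self-contained sketch redeclares a minimal `ProfileSignals` locally (serde derives omitted):

```rust
#[derive(Debug, Clone, Default)]
struct ProfileSignals {
    industry: Option<String>,
    recent_topic: Option<String>,
    pain_point: Option<String>,
    preferred_tool: Option<String>,
    communication_style: Option<String>,
}

impl ProfileSignals {
    /// Same result as the field-by-field count: tally the present signals.
    fn signal_count(&self) -> usize {
        [
            self.industry.is_some(),
            self.recent_topic.is_some(),
            self.pain_point.is_some(),
            self.preferred_tool.is_some(),
            self.communication_style.is_some(),
        ]
        .into_iter()
        .filter(|present| *present)
        .count()
    }

    fn has_any_signal(&self) -> bool {
        self.signal_count() > 0
    }
}

fn main() {
    let signals = ProfileSignals {
        industry: Some("healthcare".to_string()),
        preferred_tool: Some("researcher".to_string()),
        ..Default::default()
    };
    assert_eq!(signals.signal_count(), 2);
    assert!(signals.has_any_signal());
    assert_eq!(ProfileSignals::default().signal_count(), 0);
}
```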
/// Evolution event
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct EvolutionEvent {
pub id: String,
pub event_type: EvolutionEventType,
pub artifact_type: ArtifactType,
pub artifact_id: String,
pub status: EvolutionStatus,
pub confidence: f32,
pub user_feedback: Option<String>,
pub created_at: DateTime<Utc>,
}
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
pub enum EvolutionEventType {
SkillGenerated,
SkillOptimized,
WorkflowGenerated,
WorkflowOptimized,
}
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
pub enum ArtifactType {
Skill,
Pipeline,
}
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
pub enum EvolutionStatus {
Pending,
Confirmed,
Rejected,
Optimized,
}
/// Compute effective importance with time decay.
///
/// Uses exponential decay: each 30-day period of non-access reduces
@@ -524,4 +621,61 @@ mod tests {
assert!(!result.is_empty());
assert_eq!(result.total_count(), 1);
}
#[test]
fn test_experience_candidate_roundtrip() {
let candidate = ExperienceCandidate {
pain_pattern: "报表生成".to_string(),
context: "月度销售报表".to_string(),
solution_steps: vec!["查询数据库".to_string(), "格式化输出".to_string()],
outcome: Outcome::Success,
confidence: 0.85,
tools_used: vec!["researcher".to_string()],
industry_context: Some("healthcare".to_string()),
};
let json = serde_json::to_string(&candidate).unwrap();
let decoded: ExperienceCandidate = serde_json::from_str(&json).unwrap();
assert_eq!(decoded.pain_pattern, "报表生成");
assert_eq!(decoded.outcome, Outcome::Success);
assert_eq!(decoded.solution_steps.len(), 2);
}
#[test]
fn test_evolution_event_roundtrip() {
let event = EvolutionEvent {
id: uuid::Uuid::new_v4().to_string(),
event_type: EvolutionEventType::SkillGenerated,
artifact_type: ArtifactType::Skill,
artifact_id: "daily-report".to_string(),
status: EvolutionStatus::Pending,
confidence: 0.8,
user_feedback: None,
created_at: chrono::Utc::now(),
};
let json = serde_json::to_string(&event).unwrap();
let decoded: EvolutionEvent = serde_json::from_str(&json).unwrap();
assert_eq!(decoded.event_type, EvolutionEventType::SkillGenerated);
assert_eq!(decoded.status, EvolutionStatus::Pending);
}
#[test]
fn test_combined_extraction_default() {
let combined = CombinedExtraction::default();
assert!(combined.memories.is_empty());
assert!(combined.experiences.is_empty());
assert!(combined.profile_signals.industry.is_none());
}
#[test]
fn test_profile_signals() {
let signals = ProfileSignals {
industry: Some("healthcare".to_string()),
recent_topic: Some("报表".to_string()),
pain_point: None,
preferred_tool: Some("researcher".to_string()),
communication_style: Some("concise".to_string()),
};
assert_eq!(signals.industry.as_deref(), Some("healthcare"));
assert!(signals.pain_point.is_none());
}
}

View File

@@ -0,0 +1,180 @@
//! Workflow composer (L3 workflow evolution)
//! Analyzes repeated tool-chain patterns in trajectory data and auto-assembles Pipeline YAML.
//! Trigger condition: the same tool-chain sequence appears 2+ times in CompressedTrajectory.
use zclaw_types::Result;
/// Pipeline candidate
#[derive(Debug, Clone)]
pub struct PipelineCandidate {
pub name: String,
pub description: String,
pub triggers: Vec<String>,
pub yaml_content: String,
pub source_sessions: Vec<String>,
pub confidence: f32,
}
/// Tool-chain pattern (used for clustering analysis)
#[derive(Debug, Clone, Hash, PartialEq, Eq)]
pub struct ToolChainPattern {
pub steps: Vec<String>,
}
/// Workflow generation prompt
const WORKFLOW_GENERATION_PROMPT: &str = r#"
你是一个工作流设计专家。根据以下用户反复执行的工具链序列,设计一个可复用的 Pipeline 工作流。
工具链序列:{tool_chain}
执行频率:{frequency} 次
行业背景:{industry}
请生成以下 JSON
```json
{
"name": "工作流名称(简短中文)",
"description": "工作流描述",
"triggers": ["触发词1", "触发词2"],
"yaml_content": "Pipeline YAML 内容",
"confidence": 0.8
}
```
"#;
/// Workflow composer.
/// Analyzes tool-chain patterns in compressed trajectories and generates Pipeline YAML via LLM.
pub struct WorkflowComposer;
impl WorkflowComposer {
pub fn new() -> Self {
Self
}
    /// Extract patterns from compressed-trajectory tool chains.
    /// Simple exact-match clustering: identical tool-chain sequences count as one pattern.
pub fn extract_patterns(
trajectories: &[(String, Vec<String>)], // (session_id, tools_used)
) -> Vec<(ToolChainPattern, Vec<String>)> {
use std::collections::HashMap;
let mut groups: HashMap<ToolChainPattern, Vec<String>> = HashMap::new();
for (session_id, tools) in trajectories {
if tools.len() < 2 {
                continue; // a single-step operation does not make a workflow
}
let pattern = ToolChainPattern {
steps: tools.clone(),
};
groups.entry(pattern).or_default().push(session_id.clone());
}
        // Keep only patterns that occur at least twice
groups
.into_iter()
.filter(|(_, sessions)| sessions.len() >= 2)
.collect()
}
    /// Build the LLM prompt
pub fn build_prompt(
pattern: &ToolChainPattern,
frequency: usize,
industry: Option<&str>,
) -> String {
WORKFLOW_GENERATION_PROMPT
            .replace("{tool_chain}", &pattern.steps.join(" -> "))
.replace("{frequency}", &frequency.to_string())
.replace("{industry}", industry.unwrap_or("通用"))
}
    /// Parse the LLM JSON response into a PipelineCandidate
pub fn parse_response(
json_str: &str,
_pattern: &ToolChainPattern,
source_sessions: Vec<String>,
) -> Result<PipelineCandidate> {
let json_str = crate::json_utils::extract_json_block(json_str);
let raw: serde_json::Value = serde_json::from_str(&json_str).map_err(|e| {
zclaw_types::ZclawError::ConfigError(format!("Invalid pipeline JSON: {}", e))
})?;
Ok(PipelineCandidate {
name: raw["name"].as_str().unwrap_or("未命名工作流").to_string(),
description: raw["description"].as_str().unwrap_or("").to_string(),
triggers: crate::json_utils::extract_string_array(&raw, "triggers"),
yaml_content: raw["yaml_content"].as_str().unwrap_or("").to_string(),
source_sessions,
confidence: raw["confidence"].as_f64().unwrap_or(0.5) as f32,
})
}
}
impl Default for WorkflowComposer {
fn default() -> Self {
Self::new()
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_extract_patterns_filters_single_step() {
let trajectories = vec![
("s1".to_string(), vec!["researcher".to_string()]),
];
let patterns = WorkflowComposer::extract_patterns(&trajectories);
assert!(patterns.is_empty());
}
#[test]
fn test_extract_patterns_groups_identical_chains() {
let trajectories = vec![
("s1".to_string(), vec!["researcher".into(), "collector".into()]),
("s2".to_string(), vec!["researcher".into(), "collector".into()]),
            ("s3".to_string(), vec!["browser".into()]), // single step, filtered out
];
let patterns = WorkflowComposer::extract_patterns(&trajectories);
assert_eq!(patterns.len(), 1);
assert_eq!(patterns[0].1.len(), 2); // 2 sessions
}
#[test]
fn test_extract_patterns_requires_min_2() {
let trajectories = vec![
("s1".to_string(), vec!["a".into(), "b".into()]),
];
let patterns = WorkflowComposer::extract_patterns(&trajectories);
        assert!(patterns.is_empty()); // occurs only once
}
#[test]
fn test_build_prompt() {
let pattern = ToolChainPattern {
steps: vec!["researcher".into(), "collector".into(), "summarize".into()],
};
let prompt = WorkflowComposer::build_prompt(&pattern, 3, Some("healthcare"));
assert!(prompt.contains("researcher"));
assert!(prompt.contains("3"));
assert!(prompt.contains("healthcare"));
}
#[test]
fn test_parse_response() {
let pattern = ToolChainPattern {
steps: vec!["researcher".into()],
};
let json = r##"{"name":"每日简报","description":"搜索+汇总","triggers":["简报","日报"],"yaml_content":"steps: []","confidence":0.85}"##;
let candidate = WorkflowComposer::parse_response(
json,
&pattern,
vec!["s1".into(), "s2".into()],
)
.unwrap();
assert_eq!(candidate.name, "每日简报");
assert_eq!(candidate.triggers.len(), 2);
assert_eq!(candidate.source_sessions.len(), 2);
assert!((candidate.confidence - 0.85).abs() < 0.01);
}
}

View File

@@ -459,7 +459,7 @@ impl ClipHand {
let args = vec![
"-f", "concat",
"-safe", "0",
"-i", temp_file.to_str().ok_or_else(|| zclaw_types::ZclawError::HandError("Temp file path is not valid UTF-8".to_string()))?,
"-c", "copy",
&config.output_path,
];

View File

@@ -1,9 +1,6 @@
//! Educational Hands - Teaching and presentation capabilities
//!
//! This module provides hands for interactive experiences:
//! - Quiz: Assessment and evaluation
//! - Browser: Web automation
//! - Researcher: Deep research and analysis
@@ -11,22 +8,18 @@
//! - Clip: Video processing
//! - Twitter: Social media automation
pub mod quiz;
mod browser;
mod researcher;
mod collector;
mod clip;
mod twitter;
pub mod reminder;
pub use quiz::*;
pub use browser::*;
pub use researcher::*;
pub use collector::*;
pub use clip::*;
pub use twitter::*;
pub use reminder::*;

View File

@@ -0,0 +1,77 @@
//! Reminder Hand - Internal hand for scheduled reminders
//!
//! This is a system hand (id `_reminder`) used by the schedule interception
//! layer in `agent_chat_stream`. When the NlScheduleParser detects a schedule
//! intent in chat, it creates a trigger targeting this hand. The SchedulerService
//! fires the trigger at the scheduled time.
use async_trait::async_trait;
use serde_json::Value;
use zclaw_types::Result;
use crate::{Hand, HandConfig, HandContext, HandResult, HandStatus};
/// Internal reminder hand for scheduled tasks
pub struct ReminderHand {
config: HandConfig,
}
impl ReminderHand {
/// Create a new reminder hand
pub fn new() -> Self {
Self {
config: HandConfig {
id: "_reminder".to_string(),
name: "定时提醒".to_string(),
description: "Internal hand for scheduled reminders".to_string(),
needs_approval: false,
dependencies: vec![],
input_schema: None,
tags: vec!["system".to_string()],
enabled: true,
max_concurrent: 0,
timeout_secs: 0,
},
}
}
}
#[async_trait]
impl Hand for ReminderHand {
fn config(&self) -> &HandConfig {
&self.config
}
async fn execute(&self, _context: &HandContext, input: Value) -> Result<HandResult> {
let task_desc = input
.get("task_description")
.and_then(|v| v.as_str())
.unwrap_or("定时提醒");
let cron = input
.get("cron")
.and_then(|v| v.as_str())
.unwrap_or("");
let fired_at = input
.get("fired_at")
.and_then(|v| v.as_str())
.unwrap_or("unknown time");
tracing::info!(
"[ReminderHand] Fired at {} — task: {}, cron: {}",
fired_at, task_desc, cron
);
Ok(HandResult::success(serde_json::json!({
"task": task_desc,
"cron": cron,
"fired_at": fired_at,
"status": "reminded",
})))
}
fn status(&self) -> HandStatus {
HandStatus::Idle
}
}

View File

@@ -1,797 +0,0 @@
//! Slideshow Hand - Presentation control capabilities
//!
//! Provides slideshow control for teaching:
//! - next_slide/prev_slide: Navigation
//! - goto_slide: Jump to specific slide
//! - spotlight: Highlight elements
//! - laser: Show laser pointer
//! - highlight: Highlight areas
//! - play_animation: Trigger animations
use async_trait::async_trait;
use serde::{Deserialize, Serialize};
use serde_json::Value;
use std::sync::Arc;
use tokio::sync::RwLock;
use zclaw_types::Result;
use crate::{Hand, HandConfig, HandContext, HandResult, HandStatus};
/// Slideshow action types
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(tag = "action", rename_all = "snake_case")]
pub enum SlideshowAction {
/// Go to next slide
NextSlide,
/// Go to previous slide
PrevSlide,
/// Go to specific slide
GotoSlide {
slide_number: usize,
},
/// Spotlight/highlight an element
Spotlight {
element_id: String,
#[serde(default = "default_spotlight_duration")]
duration_ms: u64,
},
/// Show laser pointer at position
Laser {
x: f64,
y: f64,
#[serde(default = "default_laser_duration")]
duration_ms: u64,
},
/// Highlight a rectangular area
Highlight {
x: f64,
y: f64,
width: f64,
height: f64,
#[serde(default)]
color: Option<String>,
#[serde(default = "default_highlight_duration")]
duration_ms: u64,
},
/// Play animation
PlayAnimation {
animation_id: String,
},
/// Pause auto-play
Pause,
/// Resume auto-play
Resume,
/// Start auto-play
AutoPlay {
#[serde(default = "default_interval")]
interval_ms: u64,
},
/// Stop auto-play
StopAutoPlay,
/// Get current state
GetState,
/// Set slide content (for dynamic slides)
SetContent {
slide_number: usize,
content: SlideContent,
},
}
fn default_spotlight_duration() -> u64 { 2000 }
fn default_laser_duration() -> u64 { 3000 }
fn default_highlight_duration() -> u64 { 2000 }
fn default_interval() -> u64 { 5000 }
/// Slide content structure
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct SlideContent {
pub title: String,
#[serde(default)]
pub subtitle: Option<String>,
#[serde(default)]
pub content: Vec<ContentBlock>,
#[serde(default)]
pub notes: Option<String>,
#[serde(default)]
pub background: Option<String>,
}
/// Presentation/slideshow rendering content block. Domain-specific for slide content.
/// Distinct from zclaw_types::ContentBlock (LLM messages) and zclaw_protocols::ContentBlock (MCP).
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(tag = "type", rename_all = "snake_case")]
pub enum ContentBlock {
Text { text: String, style: Option<TextStyle> },
Image { url: String, alt: Option<String> },
List { items: Vec<String>, ordered: bool },
Code { code: String, language: Option<String> },
Math { latex: String },
Table { headers: Vec<String>, rows: Vec<Vec<String>> },
Chart { chart_type: String, data: serde_json::Value },
}
/// Text style options
#[derive(Debug, Clone, Serialize, Deserialize, Default)]
pub struct TextStyle {
#[serde(default)]
pub bold: bool,
#[serde(default)]
pub italic: bool,
#[serde(default)]
pub size: Option<u32>,
#[serde(default)]
pub color: Option<String>,
}
/// Slideshow state
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct SlideshowState {
pub current_slide: usize,
pub total_slides: usize,
pub is_playing: bool,
pub auto_play_interval_ms: u64,
pub slides: Vec<SlideContent>,
}
impl Default for SlideshowState {
fn default() -> Self {
Self {
current_slide: 0,
total_slides: 0,
is_playing: false,
auto_play_interval_ms: 5000,
slides: Vec::new(),
}
}
}
/// Slideshow Hand implementation
pub struct SlideshowHand {
config: HandConfig,
state: Arc<RwLock<SlideshowState>>,
}
impl SlideshowHand {
/// Create a new slideshow hand
pub fn new() -> Self {
Self {
config: HandConfig {
id: "slideshow".to_string(),
name: "幻灯片".to_string(),
description: "控制演示文稿的播放、导航和标注".to_string(),
needs_approval: false,
dependencies: vec![],
input_schema: Some(serde_json::json!({
"type": "object",
"properties": {
"action": { "type": "string" },
"slide_number": { "type": "integer" },
"element_id": { "type": "string" },
}
})),
tags: vec!["presentation".to_string(), "education".to_string()],
enabled: true,
max_concurrent: 0,
timeout_secs: 0,
},
state: Arc::new(RwLock::new(SlideshowState::default())),
}
}
/// Create with slides (async version)
pub async fn with_slides_async(slides: Vec<SlideContent>) -> Self {
let hand = Self::new();
let mut state = hand.state.write().await;
state.total_slides = slides.len();
state.slides = slides;
drop(state);
hand
}
/// Execute a slideshow action
pub async fn execute_action(&self, action: SlideshowAction) -> Result<HandResult> {
let mut state = self.state.write().await;
match action {
SlideshowAction::NextSlide => {
if state.current_slide < state.total_slides.saturating_sub(1) {
state.current_slide += 1;
}
Ok(HandResult::success(serde_json::json!({
"status": "next",
"current_slide": state.current_slide,
"total_slides": state.total_slides,
})))
}
SlideshowAction::PrevSlide => {
if state.current_slide > 0 {
state.current_slide -= 1;
}
Ok(HandResult::success(serde_json::json!({
"status": "prev",
"current_slide": state.current_slide,
"total_slides": state.total_slides,
})))
}
SlideshowAction::GotoSlide { slide_number } => {
if slide_number < state.total_slides {
state.current_slide = slide_number;
Ok(HandResult::success(serde_json::json!({
"status": "goto",
"current_slide": state.current_slide,
"slide_content": state.slides.get(slide_number),
})))
} else {
Ok(HandResult::error(format!("Slide {} out of range", slide_number)))
}
}
SlideshowAction::Spotlight { element_id, duration_ms } => {
Ok(HandResult::success(serde_json::json!({
"status": "spotlight",
"element_id": element_id,
"duration_ms": duration_ms,
})))
}
SlideshowAction::Laser { x, y, duration_ms } => {
Ok(HandResult::success(serde_json::json!({
"status": "laser",
"x": x,
"y": y,
"duration_ms": duration_ms,
})))
}
SlideshowAction::Highlight { x, y, width, height, color, duration_ms } => {
Ok(HandResult::success(serde_json::json!({
"status": "highlight",
"x": x, "y": y,
"width": width, "height": height,
"color": color.unwrap_or_else(|| "#ffcc00".to_string()),
"duration_ms": duration_ms,
})))
}
SlideshowAction::PlayAnimation { animation_id } => {
Ok(HandResult::success(serde_json::json!({
"status": "animation",
"animation_id": animation_id,
})))
}
SlideshowAction::Pause => {
state.is_playing = false;
Ok(HandResult::success(serde_json::json!({
"status": "paused",
})))
}
SlideshowAction::Resume => {
state.is_playing = true;
Ok(HandResult::success(serde_json::json!({
"status": "resumed",
})))
}
SlideshowAction::AutoPlay { interval_ms } => {
state.is_playing = true;
state.auto_play_interval_ms = interval_ms;
Ok(HandResult::success(serde_json::json!({
"status": "autoplay",
"interval_ms": interval_ms,
})))
}
SlideshowAction::StopAutoPlay => {
state.is_playing = false;
Ok(HandResult::success(serde_json::json!({
"status": "stopped",
})))
}
SlideshowAction::GetState => {
Ok(HandResult::success(serde_json::to_value(&*state).unwrap_or(Value::Null)))
}
SlideshowAction::SetContent { slide_number, content } => {
if slide_number < state.slides.len() {
state.slides[slide_number] = content.clone();
Ok(HandResult::success(serde_json::json!({
"status": "content_set",
"slide_number": slide_number,
})))
} else if slide_number == state.slides.len() {
state.slides.push(content);
state.total_slides = state.slides.len();
Ok(HandResult::success(serde_json::json!({
"status": "slide_added",
"slide_number": slide_number,
})))
} else {
Ok(HandResult::error(format!("Invalid slide number: {}", slide_number)))
}
}
}
}
/// Get current state
pub async fn get_state(&self) -> SlideshowState {
self.state.read().await.clone()
}
/// Add a slide
pub async fn add_slide(&self, content: SlideContent) {
let mut state = self.state.write().await;
state.slides.push(content);
state.total_slides = state.slides.len();
}
}
impl Default for SlideshowHand {
fn default() -> Self {
Self::new()
}
}
#[async_trait]
impl Hand for SlideshowHand {
fn config(&self) -> &HandConfig {
&self.config
}
async fn execute(&self, _context: &HandContext, input: Value) -> Result<HandResult> {
let action: SlideshowAction = match serde_json::from_value(input) {
Ok(a) => a,
Err(e) => {
return Ok(HandResult::error(format!("Invalid slideshow action: {}", e)));
}
};
self.execute_action(action).await
}
fn status(&self) -> HandStatus {
HandStatus::Idle
}
}
#[cfg(test)]
mod tests {
use super::*;
use serde_json::json;
// === Config & Defaults ===
#[tokio::test]
async fn test_slideshow_creation() {
let hand = SlideshowHand::new();
assert_eq!(hand.config().id, "slideshow");
assert_eq!(hand.config().name, "幻灯片");
assert!(!hand.config().needs_approval);
assert!(hand.config().enabled);
assert!(hand.config().tags.contains(&"presentation".to_string()));
}
#[test]
fn test_default_impl() {
let hand = SlideshowHand::default();
assert_eq!(hand.config().id, "slideshow");
}
#[test]
fn test_needs_approval() {
let hand = SlideshowHand::new();
assert!(!hand.needs_approval());
}
#[test]
fn test_status() {
let hand = SlideshowHand::new();
assert_eq!(hand.status(), HandStatus::Idle);
}
#[test]
fn test_default_state() {
let state = SlideshowState::default();
assert_eq!(state.current_slide, 0);
assert_eq!(state.total_slides, 0);
assert!(!state.is_playing);
assert_eq!(state.auto_play_interval_ms, 5000);
assert!(state.slides.is_empty());
}
// === Navigation ===
#[tokio::test]
async fn test_navigation() {
let hand = SlideshowHand::with_slides_async(vec![
SlideContent { title: "Slide 1".to_string(), subtitle: None, content: vec![], notes: None, background: None },
SlideContent { title: "Slide 2".to_string(), subtitle: None, content: vec![], notes: None, background: None },
SlideContent { title: "Slide 3".to_string(), subtitle: None, content: vec![], notes: None, background: None },
]).await;
// Next
hand.execute_action(SlideshowAction::NextSlide).await.unwrap();
assert_eq!(hand.get_state().await.current_slide, 1);
// Goto
hand.execute_action(SlideshowAction::GotoSlide { slide_number: 2 }).await.unwrap();
assert_eq!(hand.get_state().await.current_slide, 2);
// Prev
hand.execute_action(SlideshowAction::PrevSlide).await.unwrap();
assert_eq!(hand.get_state().await.current_slide, 1);
}
#[tokio::test]
async fn test_next_slide_at_end() {
let hand = SlideshowHand::with_slides_async(vec![
SlideContent { title: "Only Slide".to_string(), subtitle: None, content: vec![], notes: None, background: None },
]).await;
// At slide 0, should not advance past last slide
hand.execute_action(SlideshowAction::NextSlide).await.unwrap();
assert_eq!(hand.get_state().await.current_slide, 0);
}
#[tokio::test]
async fn test_prev_slide_at_beginning() {
let hand = SlideshowHand::with_slides_async(vec![
SlideContent { title: "Slide 1".to_string(), subtitle: None, content: vec![], notes: None, background: None },
SlideContent { title: "Slide 2".to_string(), subtitle: None, content: vec![], notes: None, background: None },
]).await;
// At slide 0, should not go below 0
hand.execute_action(SlideshowAction::PrevSlide).await.unwrap();
assert_eq!(hand.get_state().await.current_slide, 0);
}
#[tokio::test]
async fn test_goto_slide_out_of_range() {
let hand = SlideshowHand::with_slides_async(vec![
SlideContent { title: "Slide 1".to_string(), subtitle: None, content: vec![], notes: None, background: None },
]).await;
let result = hand.execute_action(SlideshowAction::GotoSlide { slide_number: 5 }).await.unwrap();
assert!(!result.success);
}
#[tokio::test]
async fn test_goto_slide_returns_content() {
let hand = SlideshowHand::with_slides_async(vec![
SlideContent { title: "First".to_string(), subtitle: None, content: vec![], notes: None, background: None },
SlideContent { title: "Second".to_string(), subtitle: None, content: vec![], notes: None, background: None },
]).await;
let result = hand.execute_action(SlideshowAction::GotoSlide { slide_number: 1 }).await.unwrap();
assert!(result.success);
assert_eq!(result.output["slide_content"]["title"], "Second");
}
// === Spotlight & Laser & Highlight ===
#[tokio::test]
async fn test_spotlight() {
let hand = SlideshowHand::new();
let action = SlideshowAction::Spotlight {
element_id: "title".to_string(),
duration_ms: 2000,
};
let result = hand.execute_action(action).await.unwrap();
assert!(result.success);
assert_eq!(result.output["element_id"], "title");
assert_eq!(result.output["duration_ms"], 2000);
}
#[tokio::test]
async fn test_spotlight_default_duration() {
let hand = SlideshowHand::new();
let action = SlideshowAction::Spotlight {
element_id: "elem".to_string(),
duration_ms: default_spotlight_duration(),
};
let result = hand.execute_action(action).await.unwrap();
assert_eq!(result.output["duration_ms"], 2000);
}
#[tokio::test]
async fn test_laser() {
let hand = SlideshowHand::new();
let action = SlideshowAction::Laser {
x: 100.0,
y: 200.0,
duration_ms: 3000,
};
let result = hand.execute_action(action).await.unwrap();
assert!(result.success);
assert_eq!(result.output["x"], 100.0);
assert_eq!(result.output["y"], 200.0);
}
#[tokio::test]
async fn test_highlight_default_color() {
let hand = SlideshowHand::new();
let action = SlideshowAction::Highlight {
x: 10.0, y: 20.0, width: 100.0, height: 50.0,
color: None, duration_ms: 2000,
};
let result = hand.execute_action(action).await.unwrap();
assert!(result.success);
assert_eq!(result.output["color"], "#ffcc00");
}
#[tokio::test]
async fn test_highlight_custom_color() {
let hand = SlideshowHand::new();
let action = SlideshowAction::Highlight {
x: 0.0, y: 0.0, width: 50.0, height: 50.0,
color: Some("#ff0000".to_string()), duration_ms: 1000,
};
let result = hand.execute_action(action).await.unwrap();
assert_eq!(result.output["color"], "#ff0000");
}
// === AutoPlay / Pause / Resume ===
#[tokio::test]
async fn test_autoplay_pause_resume() {
let hand = SlideshowHand::new();
// AutoPlay
let result = hand.execute_action(SlideshowAction::AutoPlay { interval_ms: 3000 }).await.unwrap();
assert!(result.success);
assert!(hand.get_state().await.is_playing);
assert_eq!(hand.get_state().await.auto_play_interval_ms, 3000);
// Pause
hand.execute_action(SlideshowAction::Pause).await.unwrap();
assert!(!hand.get_state().await.is_playing);
// Resume
hand.execute_action(SlideshowAction::Resume).await.unwrap();
assert!(hand.get_state().await.is_playing);
// Stop
hand.execute_action(SlideshowAction::StopAutoPlay).await.unwrap();
assert!(!hand.get_state().await.is_playing);
}
#[tokio::test]
async fn test_autoplay_default_interval() {
let hand = SlideshowHand::new();
hand.execute_action(SlideshowAction::AutoPlay { interval_ms: default_interval() }).await.unwrap();
assert_eq!(hand.get_state().await.auto_play_interval_ms, 5000);
}
// === PlayAnimation ===
#[tokio::test]
async fn test_play_animation() {
let hand = SlideshowHand::new();
let result = hand.execute_action(SlideshowAction::PlayAnimation {
animation_id: "fade_in".to_string(),
}).await.unwrap();
assert!(result.success);
assert_eq!(result.output["animation_id"], "fade_in");
}
// === GetState ===
#[tokio::test]
async fn test_get_state() {
let hand = SlideshowHand::with_slides_async(vec![
SlideContent { title: "A".to_string(), subtitle: None, content: vec![], notes: None, background: None },
]).await;
let result = hand.execute_action(SlideshowAction::GetState).await.unwrap();
assert!(result.success);
assert_eq!(result.output["total_slides"], 1);
assert_eq!(result.output["current_slide"], 0);
}
// === SetContent ===
#[tokio::test]
async fn test_set_content() {
let hand = SlideshowHand::new();
let content = SlideContent {
title: "Test Slide".to_string(),
subtitle: Some("Subtitle".to_string()),
content: vec![ContentBlock::Text {
text: "Hello".to_string(),
style: None,
}],
notes: Some("Speaker notes".to_string()),
background: None,
};
let result = hand.execute_action(SlideshowAction::SetContent {
slide_number: 0,
content,
}).await.unwrap();
assert!(result.success);
assert_eq!(hand.get_state().await.total_slides, 1);
assert_eq!(hand.get_state().await.slides[0].title, "Test Slide");
}
#[tokio::test]
async fn test_set_content_append() {
let hand = SlideshowHand::with_slides_async(vec![
SlideContent { title: "First".to_string(), subtitle: None, content: vec![], notes: None, background: None },
]).await;
let content = SlideContent {
title: "Appended".to_string(), subtitle: None, content: vec![], notes: None, background: None,
};
let result = hand.execute_action(SlideshowAction::SetContent {
slide_number: 1,
content,
}).await.unwrap();
assert!(result.success);
assert_eq!(result.output["status"], "slide_added");
assert_eq!(hand.get_state().await.total_slides, 2);
}
#[tokio::test]
async fn test_set_content_invalid_index() {
let hand = SlideshowHand::new();
let content = SlideContent {
title: "Gap".to_string(), subtitle: None, content: vec![], notes: None, background: None,
};
let result = hand.execute_action(SlideshowAction::SetContent {
slide_number: 5,
content,
}).await.unwrap();
assert!(!result.success);
}
// === Action Deserialization ===
#[test]
fn test_deserialize_next_slide() {
let action: SlideshowAction = serde_json::from_value(json!({"action": "next_slide"})).unwrap();
assert!(matches!(action, SlideshowAction::NextSlide));
}
#[test]
fn test_deserialize_goto_slide() {
let action: SlideshowAction = serde_json::from_value(json!({"action": "goto_slide", "slide_number": 3})).unwrap();
match action {
SlideshowAction::GotoSlide { slide_number } => assert_eq!(slide_number, 3),
_ => panic!("Expected GotoSlide"),
}
}
#[test]
fn test_deserialize_laser() {
let action: SlideshowAction = serde_json::from_value(json!({
"action": "laser", "x": 50.0, "y": 75.0
})).unwrap();
match action {
SlideshowAction::Laser { x, y, .. } => {
assert_eq!(x, 50.0);
assert_eq!(y, 75.0);
}
_ => panic!("Expected Laser"),
}
}
#[test]
fn test_deserialize_autoplay() {
let action: SlideshowAction = serde_json::from_value(json!({"action": "auto_play"})).unwrap();
match action {
SlideshowAction::AutoPlay { interval_ms } => assert_eq!(interval_ms, 5000),
_ => panic!("Expected AutoPlay"),
}
}
#[test]
fn test_deserialize_invalid_action() {
let result = serde_json::from_value::<SlideshowAction>(json!({"action": "nonexistent"}));
assert!(result.is_err());
}
// === ContentBlock Deserialization ===
#[test]
fn test_content_block_text() {
let block: ContentBlock = serde_json::from_value(json!({
"type": "text", "text": "Hello"
})).unwrap();
match block {
ContentBlock::Text { text, style } => {
assert_eq!(text, "Hello");
assert!(style.is_none());
}
_ => panic!("Expected Text"),
}
}
#[test]
fn test_content_block_list() {
let block: ContentBlock = serde_json::from_value(json!({
"type": "list", "items": ["A", "B"], "ordered": true
})).unwrap();
match block {
ContentBlock::List { items, ordered } => {
assert_eq!(items, vec!["A", "B"]);
assert!(ordered);
}
_ => panic!("Expected List"),
}
}
#[test]
fn test_content_block_code() {
let block: ContentBlock = serde_json::from_value(json!({
"type": "code", "code": "fn main() {}", "language": "rust"
})).unwrap();
match block {
ContentBlock::Code { code, language } => {
assert_eq!(code, "fn main() {}");
assert_eq!(language, Some("rust".to_string()));
}
_ => panic!("Expected Code"),
}
}
#[test]
fn test_content_block_table() {
let block: ContentBlock = serde_json::from_value(json!({
"type": "table",
"headers": ["Name", "Age"],
"rows": [["Alice", "30"]]
})).unwrap();
match block {
ContentBlock::Table { headers, rows } => {
assert_eq!(headers, vec!["Name", "Age"]);
assert_eq!(rows, vec![vec!["Alice", "30"]]);
}
_ => panic!("Expected Table"),
}
}
// === Hand trait via execute ===
#[tokio::test]
async fn test_hand_execute_dispatch() {
let hand = SlideshowHand::with_slides_async(vec![
SlideContent { title: "S1".to_string(), subtitle: None, content: vec![], notes: None, background: None },
SlideContent { title: "S2".to_string(), subtitle: None, content: vec![], notes: None, background: None },
]).await;
let ctx = HandContext::default();
let result = hand.execute(&ctx, json!({"action": "next_slide"})).await.unwrap();
assert!(result.success);
assert_eq!(result.output["current_slide"], 1);
}
#[tokio::test]
async fn test_hand_execute_invalid_action() {
let hand = SlideshowHand::new();
let ctx = HandContext::default();
let result = hand.execute(&ctx, json!({"action": "invalid"})).await.unwrap();
assert!(!result.success);
}
// === add_slide helper ===
#[tokio::test]
async fn test_add_slide() {
let hand = SlideshowHand::new();
hand.add_slide(SlideContent {
title: "Dynamic".to_string(), subtitle: None, content: vec![], notes: None, background: None,
}).await;
hand.add_slide(SlideContent {
title: "Dynamic 2".to_string(), subtitle: None, content: vec![], notes: None, background: None,
}).await;
let state = hand.get_state().await;
assert_eq!(state.total_slides, 2);
assert_eq!(state.slides.len(), 2);
}
}


@@ -1,442 +0,0 @@
//! Speech Hand - Text-to-Speech synthesis capabilities
//!
//! Provides speech synthesis for teaching:
//! - speak: Convert text to speech
//! - speak_ssml: Advanced speech with SSML markup
//! - pause/resume/stop: Playback control
//! - list_voices: Get available voices
//! - set_voice: Configure voice settings
use async_trait::async_trait;
use serde::{Deserialize, Serialize};
use serde_json::Value;
use std::sync::Arc;
use tokio::sync::RwLock;
use zclaw_types::Result;
use crate::{Hand, HandConfig, HandContext, HandResult, HandStatus};
/// TTS Provider types
#[derive(Debug, Clone, Serialize, Deserialize, Default)]
#[serde(rename_all = "lowercase")]
pub enum TtsProvider {
#[default]
Browser,
Azure,
OpenAI,
ElevenLabs,
Local,
}
/// Speech action types
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(tag = "action", rename_all = "snake_case")]
pub enum SpeechAction {
/// Speak text
Speak {
text: String,
#[serde(default)]
voice: Option<String>,
#[serde(default = "default_rate")]
rate: f32,
#[serde(default = "default_pitch")]
pitch: f32,
#[serde(default = "default_volume")]
volume: f32,
#[serde(default)]
language: Option<String>,
},
/// Speak with SSML markup
SpeakSsml {
ssml: String,
#[serde(default)]
voice: Option<String>,
},
/// Pause playback
Pause,
/// Resume playback
Resume,
/// Stop playback
Stop,
/// List available voices
ListVoices {
#[serde(default)]
language: Option<String>,
},
/// Set default voice
SetVoice {
voice: String,
#[serde(default)]
language: Option<String>,
},
/// Set provider
SetProvider {
provider: TtsProvider,
#[serde(default)]
api_key: Option<String>,
#[serde(default)]
region: Option<String>,
},
}
fn default_rate() -> f32 { 1.0 }
fn default_pitch() -> f32 { 1.0 }
fn default_volume() -> f32 { 1.0 }
/// Voice information
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct VoiceInfo {
pub id: String,
pub name: String,
pub language: String,
pub gender: String,
#[serde(default)]
pub preview_url: Option<String>,
}
/// Playback state
#[derive(Debug, Clone, Serialize, Deserialize, Default)]
pub enum PlaybackState {
#[default]
Idle,
Playing,
Paused,
}
/// Speech configuration
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct SpeechConfig {
pub provider: TtsProvider,
pub default_voice: Option<String>,
pub default_language: String,
pub default_rate: f32,
pub default_pitch: f32,
pub default_volume: f32,
}
impl Default for SpeechConfig {
fn default() -> Self {
Self {
provider: TtsProvider::Browser,
default_voice: None,
default_language: "zh-CN".to_string(),
default_rate: 1.0,
default_pitch: 1.0,
default_volume: 1.0,
}
}
}
/// Speech state
#[derive(Debug, Clone, Default)]
pub struct SpeechState {
pub config: SpeechConfig,
pub playback: PlaybackState,
pub current_text: Option<String>,
pub position_ms: u64,
pub available_voices: Vec<VoiceInfo>,
}
/// Speech Hand implementation
pub struct SpeechHand {
config: HandConfig,
state: Arc<RwLock<SpeechState>>,
}
impl SpeechHand {
/// Create a new speech hand
pub fn new() -> Self {
Self {
config: HandConfig {
id: "speech".to_string(),
name: "语音合成".to_string(), // "Speech Synthesis"
description: "文本转语音合成输出".to_string(), // "Text-to-speech synthesis output"
needs_approval: false,
dependencies: vec![],
input_schema: Some(serde_json::json!({
"type": "object",
"properties": {
"action": { "type": "string" },
"text": { "type": "string" },
"voice": { "type": "string" },
"rate": { "type": "number" },
}
})),
tags: vec!["audio".to_string(), "tts".to_string(), "education".to_string(), "demo".to_string()],
enabled: true,
max_concurrent: 0,
timeout_secs: 0,
},
state: Arc::new(RwLock::new(SpeechState {
config: SpeechConfig::default(),
playback: PlaybackState::Idle,
available_voices: Self::get_default_voices(),
..Default::default()
})),
}
}
/// Create with custom provider
pub fn with_provider(provider: TtsProvider) -> Self {
let hand = Self::new();
// blocking_write() panics if called from inside an async runtime;
// this constructor must only be used from synchronous code.
let mut state = hand.state.blocking_write();
state.config.provider = provider;
drop(state);
hand
}
/// Get default voices
fn get_default_voices() -> Vec<VoiceInfo> {
vec![
VoiceInfo {
id: "zh-CN-XiaoxiaoNeural".to_string(),
name: "Xiaoxiao".to_string(),
language: "zh-CN".to_string(),
gender: "female".to_string(),
preview_url: None,
},
VoiceInfo {
id: "zh-CN-YunxiNeural".to_string(),
name: "Yunxi".to_string(),
language: "zh-CN".to_string(),
gender: "male".to_string(),
preview_url: None,
},
VoiceInfo {
id: "en-US-JennyNeural".to_string(),
name: "Jenny".to_string(),
language: "en-US".to_string(),
gender: "female".to_string(),
preview_url: None,
},
VoiceInfo {
id: "en-US-GuyNeural".to_string(),
name: "Guy".to_string(),
language: "en-US".to_string(),
gender: "male".to_string(),
preview_url: None,
},
]
}
/// Execute a speech action
pub async fn execute_action(&self, action: SpeechAction) -> Result<HandResult> {
let mut state = self.state.write().await;
match action {
SpeechAction::Speak { text, voice, rate, pitch, volume, language } => {
let voice_id = voice.or(state.config.default_voice.clone())
.unwrap_or_else(|| "default".to_string());
let lang = language.unwrap_or_else(|| state.config.default_language.clone());
// 1.0 is the serde default, so treat it as "unset" and fall back to the configured defaults
let actual_rate = if rate == 1.0 { state.config.default_rate } else { rate };
let actual_pitch = if pitch == 1.0 { state.config.default_pitch } else { pitch };
let actual_volume = if volume == 1.0 { state.config.default_volume } else { volume };
state.playback = PlaybackState::Playing;
state.current_text = Some(text.clone());
// Determine TTS method based on provider:
// - Browser: frontend uses Web Speech API (zero deps, works offline)
// - OpenAI: frontend calls speech_tts command (high-quality, needs API key)
// - Others: future support
let tts_method = match state.config.provider {
TtsProvider::Browser => "browser",
TtsProvider::OpenAI => "openai_api",
TtsProvider::Azure => "azure_api",
TtsProvider::ElevenLabs => "elevenlabs_api",
TtsProvider::Local => "local_engine",
};
// Rough duration estimate: assume ~5 characters per second of speech
let estimated_duration_ms = (text.chars().count() as f64 / 5.0 * 1000.0) as u64;
Ok(HandResult::success(serde_json::json!({
"status": "speaking",
"tts_method": tts_method,
"text": text,
"voice": voice_id,
"language": lang,
"rate": actual_rate,
"pitch": actual_pitch,
"volume": actual_volume,
"provider": format!("{:?}", state.config.provider).to_lowercase(),
"duration_ms": estimated_duration_ms,
"instruction": "Frontend should play this via TTS engine"
})))
}
SpeechAction::SpeakSsml { ssml, voice } => {
let voice_id = voice.or(state.config.default_voice.clone())
.unwrap_or_else(|| "default".to_string());
state.playback = PlaybackState::Playing;
state.current_text = Some(ssml.clone());
Ok(HandResult::success(serde_json::json!({
"status": "speaking_ssml",
"ssml": ssml,
"voice": voice_id,
"provider": state.config.provider,
})))
}
SpeechAction::Pause => {
state.playback = PlaybackState::Paused;
Ok(HandResult::success(serde_json::json!({
"status": "paused",
"position_ms": state.position_ms,
})))
}
SpeechAction::Resume => {
state.playback = PlaybackState::Playing;
Ok(HandResult::success(serde_json::json!({
"status": "resumed",
"position_ms": state.position_ms,
})))
}
SpeechAction::Stop => {
state.playback = PlaybackState::Idle;
state.current_text = None;
state.position_ms = 0;
Ok(HandResult::success(serde_json::json!({
"status": "stopped",
})))
}
SpeechAction::ListVoices { language } => {
let voices: Vec<_> = state.available_voices.iter()
.filter(|v| {
language.as_ref()
.map(|l| v.language.starts_with(l))
.unwrap_or(true)
})
.cloned()
.collect();
Ok(HandResult::success(serde_json::json!({
"voices": voices,
"count": voices.len(),
})))
}
SpeechAction::SetVoice { voice, language } => {
state.config.default_voice = Some(voice.clone());
if let Some(lang) = language {
state.config.default_language = lang;
}
Ok(HandResult::success(serde_json::json!({
"status": "voice_set",
"voice": voice,
"language": state.config.default_language,
})))
}
SpeechAction::SetProvider { provider, api_key, region: _ } => {
state.config.provider = provider.clone();
// In real implementation, would configure provider
Ok(HandResult::success(serde_json::json!({
"status": "provider_set",
"provider": provider,
"configured": api_key.is_some(),
})))
}
}
}
/// Get current state
pub async fn get_state(&self) -> SpeechState {
self.state.read().await.clone()
}
}
impl Default for SpeechHand {
fn default() -> Self {
Self::new()
}
}
#[async_trait]
impl Hand for SpeechHand {
fn config(&self) -> &HandConfig {
&self.config
}
async fn execute(&self, _context: &HandContext, input: Value) -> Result<HandResult> {
let action: SpeechAction = match serde_json::from_value(input) {
Ok(a) => a,
Err(e) => {
return Ok(HandResult::error(format!("Invalid speech action: {}", e)));
}
};
self.execute_action(action).await
}
fn status(&self) -> HandStatus {
HandStatus::Idle
}
}
#[cfg(test)]
mod tests {
use super::*;
#[tokio::test]
async fn test_speech_creation() {
let hand = SpeechHand::new();
assert_eq!(hand.config().id, "speech");
}
#[tokio::test]
async fn test_speak() {
let hand = SpeechHand::new();
let action = SpeechAction::Speak {
text: "Hello, world!".to_string(),
voice: None,
rate: 1.0,
pitch: 1.0,
volume: 1.0,
language: None,
};
let result = hand.execute_action(action).await.unwrap();
assert!(result.success);
}
#[tokio::test]
async fn test_pause_resume() {
let hand = SpeechHand::new();
// Speak first
hand.execute_action(SpeechAction::Speak {
text: "Test".to_string(),
voice: None, rate: 1.0, pitch: 1.0, volume: 1.0, language: None,
}).await.unwrap();
// Pause
let result = hand.execute_action(SpeechAction::Pause).await.unwrap();
assert!(result.success);
// Resume
let result = hand.execute_action(SpeechAction::Resume).await.unwrap();
assert!(result.success);
}
#[tokio::test]
async fn test_list_voices() {
let hand = SpeechHand::new();
let action = SpeechAction::ListVoices { language: Some("zh".to_string()) };
let result = hand.execute_action(action).await.unwrap();
assert!(result.success);
}
#[tokio::test]
async fn test_set_voice() {
let hand = SpeechHand::new();
let action = SpeechAction::SetVoice {
voice: "zh-CN-XiaoxiaoNeural".to_string(),
language: Some("zh-CN".to_string()),
};
let result = hand.execute_action(action).await.unwrap();
assert!(result.success);
let state = hand.get_state().await;
assert_eq!(state.config.default_voice, Some("zh-CN-XiaoxiaoNeural".to_string()));
}
}
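The `Speak` handler above estimates playback time from character count (about 5 characters per second). A standalone, std-only sketch of that heuristic — the function name and the `rate` adjustment are illustrative additions, not part of the crate:

```rust
/// Estimate TTS playback duration from text length.
/// Mirrors the heuristic in SpeechHand's Speak handler (~5 chars/sec);
/// real engines report exact duration, so treat this as a rough placeholder.
fn estimate_duration_ms(text: &str, rate: f32) -> u64 {
    // chars().count() handles multi-byte characters (e.g. CJK) correctly,
    // unlike text.len(), which counts UTF-8 bytes.
    let chars = text.chars().count() as f64;
    let base_ms = chars / 5.0 * 1000.0;
    // A faster speech rate shortens the estimate proportionally.
    (base_ms / rate.max(0.1) as f64) as u64
}

fn main() {
    // "Hello, world!" is 13 chars -> 13 / 5 * 1000 = 2600 ms at rate 1.0
    assert_eq!(estimate_duration_ms("Hello, world!", 1.0), 2600);
    // "你好" is 2 chars, not 6 bytes -> 400 ms
    assert_eq!(estimate_duration_ms("你好", 1.0), 400);
    println!("ok");
}
```

Counting chars rather than bytes matters here because the hand's default language is zh-CN, where byte length would triple the estimate.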


@@ -1,422 +0,0 @@
//! Whiteboard Hand - Drawing and annotation capabilities
//!
//! Provides whiteboard drawing actions for teaching:
//! - draw_text: Draw text on the whiteboard
//! - draw_shape: Draw shapes (rectangle, circle, arrow, etc.)
//! - draw_line: Draw lines and curves
//! - draw_chart: Draw charts (bar, line, pie)
//! - draw_latex: Render LaTeX formulas
//! - draw_table: Draw data tables
//! - clear: Clear the whiteboard
//! - export: Export as image
use async_trait::async_trait;
use serde::{Deserialize, Serialize};
use serde_json::Value;
use zclaw_types::Result;
use crate::{Hand, HandConfig, HandContext, HandResult, HandStatus};
/// Whiteboard action types
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(tag = "action", rename_all = "snake_case")]
pub enum WhiteboardAction {
/// Draw text
DrawText {
x: f64,
y: f64,
text: String,
#[serde(default = "default_font_size")]
font_size: u32,
#[serde(default)]
color: Option<String>,
#[serde(default)]
font_family: Option<String>,
},
/// Draw a shape
DrawShape {
shape: ShapeType,
x: f64,
y: f64,
width: f64,
height: f64,
#[serde(default)]
fill: Option<String>,
#[serde(default)]
stroke: Option<String>,
#[serde(default = "default_stroke_width")]
stroke_width: u32,
},
/// Draw a line
DrawLine {
points: Vec<Point>,
#[serde(default)]
color: Option<String>,
#[serde(default = "default_stroke_width")]
stroke_width: u32,
},
/// Draw a chart
DrawChart {
chart_type: ChartType,
data: ChartData,
x: f64,
y: f64,
width: f64,
height: f64,
#[serde(default)]
title: Option<String>,
},
/// Draw LaTeX formula
DrawLatex {
latex: String,
x: f64,
y: f64,
#[serde(default = "default_font_size")]
font_size: u32,
#[serde(default)]
color: Option<String>,
},
/// Draw a table
DrawTable {
headers: Vec<String>,
rows: Vec<Vec<String>>,
x: f64,
y: f64,
#[serde(default)]
column_widths: Option<Vec<f64>>,
},
/// Erase area
Erase {
x: f64,
y: f64,
width: f64,
height: f64,
},
/// Clear whiteboard
Clear,
/// Undo last action
Undo,
/// Redo last undone action
Redo,
/// Export as image
Export {
#[serde(default = "default_export_format")]
format: String,
},
}
fn default_font_size() -> u32 { 16 }
fn default_stroke_width() -> u32 { 2 }
fn default_export_format() -> String { "png".to_string() }
/// Shape types
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(rename_all = "snake_case")]
pub enum ShapeType {
Rectangle,
RoundedRectangle,
Circle,
Ellipse,
Triangle,
Arrow,
Star,
Checkmark,
Cross,
}
/// Point for line drawing
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Point {
pub x: f64,
pub y: f64,
}
/// Chart types
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(rename_all = "snake_case")]
pub enum ChartType {
Bar,
Line,
Pie,
Scatter,
Area,
Radar,
}
/// Chart data
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ChartData {
pub labels: Vec<String>,
pub datasets: Vec<Dataset>,
}
/// Dataset for charts
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Dataset {
pub label: String,
pub values: Vec<f64>,
#[serde(default)]
pub color: Option<String>,
}
/// Whiteboard state (for undo/redo)
#[derive(Debug, Clone, Default)]
pub struct WhiteboardState {
pub actions: Vec<WhiteboardAction>,
pub undone: Vec<WhiteboardAction>,
pub canvas_width: f64,
pub canvas_height: f64,
}
/// Whiteboard Hand implementation
pub struct WhiteboardHand {
config: HandConfig,
state: std::sync::Arc<tokio::sync::RwLock<WhiteboardState>>,
}
impl WhiteboardHand {
/// Create a new whiteboard hand
pub fn new() -> Self {
Self {
config: HandConfig {
id: "whiteboard".to_string(),
name: "白板".to_string(), // "Whiteboard"
description: "在虚拟白板上绘制和标注".to_string(), // "Draw and annotate on a virtual whiteboard"
needs_approval: false,
dependencies: vec![],
input_schema: Some(serde_json::json!({
"type": "object",
"properties": {
"action": { "type": "string" },
"x": { "type": "number" },
"y": { "type": "number" },
"text": { "type": "string" },
}
})),
tags: vec!["presentation".to_string(), "education".to_string()],
enabled: true,
max_concurrent: 0,
timeout_secs: 0,
},
state: std::sync::Arc::new(tokio::sync::RwLock::new(WhiteboardState {
canvas_width: 1920.0,
canvas_height: 1080.0,
..Default::default()
})),
}
}
/// Create with custom canvas size
pub fn with_size(width: f64, height: f64) -> Self {
let hand = Self::new();
// blocking_write() panics if called from inside an async runtime;
// this constructor must only be used from synchronous code.
let mut state = hand.state.blocking_write();
state.canvas_width = width;
state.canvas_height = height;
drop(state);
hand
}
/// Execute a whiteboard action
pub async fn execute_action(&self, action: WhiteboardAction) -> Result<HandResult> {
let mut state = self.state.write().await;
match &action {
WhiteboardAction::Clear => {
state.actions.clear();
state.undone.clear();
return Ok(HandResult::success(serde_json::json!({
"status": "cleared",
"action_count": 0
})));
}
WhiteboardAction::Undo => {
if let Some(last) = state.actions.pop() {
state.undone.push(last);
return Ok(HandResult::success(serde_json::json!({
"status": "undone",
"remaining_actions": state.actions.len()
})));
}
return Ok(HandResult::success(serde_json::json!({
"status": "no_action_to_undo"
})));
}
WhiteboardAction::Redo => {
if let Some(redone) = state.undone.pop() {
state.actions.push(redone);
return Ok(HandResult::success(serde_json::json!({
"status": "redone",
"total_actions": state.actions.len()
})));
}
return Ok(HandResult::success(serde_json::json!({
"status": "no_action_to_redo"
})));
}
WhiteboardAction::Export { format } => {
// In real implementation, would render to image
return Ok(HandResult::success(serde_json::json!({
"status": "exported",
"format": format,
"data_url": format!("data:image/{};base64,<rendered_data>", format)
})));
}
_ => {
// Regular drawing action
state.actions.push(action.clone());
return Ok(HandResult::success(serde_json::json!({
"status": "drawn",
"action": action,
"total_actions": state.actions.len()
})));
}
}
}
/// Get current state
pub async fn get_state(&self) -> WhiteboardState {
self.state.read().await.clone()
}
/// Get all actions
pub async fn get_actions(&self) -> Vec<WhiteboardAction> {
self.state.read().await.actions.clone()
}
}
impl Default for WhiteboardHand {
fn default() -> Self {
Self::new()
}
}
#[async_trait]
impl Hand for WhiteboardHand {
fn config(&self) -> &HandConfig {
&self.config
}
async fn execute(&self, _context: &HandContext, input: Value) -> Result<HandResult> {
// Parse action from input
let action: WhiteboardAction = match serde_json::from_value(input.clone()) {
Ok(a) => a,
Err(e) => {
return Ok(HandResult::error(format!("Invalid whiteboard action: {}", e)));
}
};
self.execute_action(action).await
}
fn status(&self) -> HandStatus {
// Stateless from the trait's point of view; always reports Idle
HandStatus::Idle
}
}
#[cfg(test)]
mod tests {
use super::*;
#[tokio::test]
async fn test_whiteboard_creation() {
let hand = WhiteboardHand::new();
assert_eq!(hand.config().id, "whiteboard");
}
#[tokio::test]
async fn test_draw_text() {
let hand = WhiteboardHand::new();
let action = WhiteboardAction::DrawText {
x: 100.0,
y: 100.0,
text: "Hello World".to_string(),
font_size: 24,
color: Some("#333333".to_string()),
font_family: None,
};
let result = hand.execute_action(action).await.unwrap();
assert!(result.success);
let state = hand.get_state().await;
assert_eq!(state.actions.len(), 1);
}
#[tokio::test]
async fn test_draw_shape() {
let hand = WhiteboardHand::new();
let action = WhiteboardAction::DrawShape {
shape: ShapeType::Rectangle,
x: 50.0,
y: 50.0,
width: 200.0,
height: 100.0,
fill: Some("#4CAF50".to_string()),
stroke: None,
stroke_width: 2,
};
let result = hand.execute_action(action).await.unwrap();
assert!(result.success);
}
#[tokio::test]
async fn test_undo_redo() {
let hand = WhiteboardHand::new();
// Draw something
hand.execute_action(WhiteboardAction::DrawText {
x: 0.0, y: 0.0, text: "Test".to_string(), font_size: 16, color: None, font_family: None,
}).await.unwrap();
// Undo
let result = hand.execute_action(WhiteboardAction::Undo).await.unwrap();
assert!(result.success);
assert_eq!(hand.get_state().await.actions.len(), 0);
// Redo
let result = hand.execute_action(WhiteboardAction::Redo).await.unwrap();
assert!(result.success);
assert_eq!(hand.get_state().await.actions.len(), 1);
}
#[tokio::test]
async fn test_clear() {
let hand = WhiteboardHand::new();
// Draw something
hand.execute_action(WhiteboardAction::DrawText {
x: 0.0, y: 0.0, text: "Test".to_string(), font_size: 16, color: None, font_family: None,
}).await.unwrap();
// Clear
let result = hand.execute_action(WhiteboardAction::Clear).await.unwrap();
assert!(result.success);
assert_eq!(hand.get_state().await.actions.len(), 0);
}
#[tokio::test]
async fn test_chart() {
let hand = WhiteboardHand::new();
let action = WhiteboardAction::DrawChart {
chart_type: ChartType::Bar,
data: ChartData {
labels: vec!["A".to_string(), "B".to_string(), "C".to_string()],
datasets: vec![Dataset {
label: "Values".to_string(),
values: vec![10.0, 20.0, 15.0],
color: Some("#2196F3".to_string()),
}],
},
x: 100.0,
y: 100.0,
width: 400.0,
height: 300.0,
title: Some("Test Chart".to_string()),
};
let result = hand.execute_action(action).await.unwrap();
assert!(result.success);
}
}
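The whiteboard's undo/redo above is the classic two-stack pattern: `actions` holds applied operations and `undone` holds reverted ones. A minimal self-contained sketch (names are illustrative, not from the crate) — note one deliberate difference: this sketch clears the redo stack on every new action, the conventional behavior, whereas the hand above only clears `undone` on `Clear`:

```rust
/// Minimal two-stack undo/redo, mirroring WhiteboardState's
/// `actions` / `undone` vectors.
struct History<T> {
    actions: Vec<T>,
    undone: Vec<T>,
}

impl<T> History<T> {
    fn new() -> Self {
        Self { actions: Vec::new(), undone: Vec::new() }
    }

    fn push(&mut self, action: T) {
        self.actions.push(action);
        self.undone.clear(); // redo history is invalidated by a new action
    }

    /// Returns false when there is nothing to undo.
    fn undo(&mut self) -> bool {
        match self.actions.pop() {
            Some(a) => { self.undone.push(a); true }
            None => false,
        }
    }

    /// Returns false when there is nothing to redo.
    fn redo(&mut self) -> bool {
        match self.undone.pop() {
            Some(a) => { self.actions.push(a); true }
            None => false,
        }
    }
}

fn main() {
    let mut h = History::new();
    h.push("draw_text");
    h.push("draw_shape");
    assert!(h.undo());
    assert_eq!(h.actions.len(), 1);
    assert!(h.redo());
    assert_eq!(h.actions.len(), 2);
    h.push("draw_line");
    assert!(h.undone.is_empty()); // redo stack cleared by the new action
    println!("ok");
}
```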


@@ -9,8 +9,6 @@ description = "ZCLAW kernel - central coordinator for all subsystems"
[features]
default = []
# Enable multi-agent orchestration (Director, A2A protocol)
multi-agent = ["zclaw-protocols/a2a"]
[dependencies]
zclaw-types = { workspace = true }


@@ -30,7 +30,7 @@ impl Default for ApiProtocol {
///
/// This is the single source of truth for LLM configuration.
/// Model ID is passed directly to the API without any transformation.
#[derive(Debug, Clone, Serialize, Deserialize)]
#[derive(Clone, Serialize, Deserialize)]
pub struct LlmConfig {
/// API base URL (e.g., "https://api.openai.com/v1")
pub base_url: String,
@@ -61,6 +61,20 @@ pub struct LlmConfig {
pub context_window: u32,
}
impl std::fmt::Debug for LlmConfig {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.debug_struct("LlmConfig")
.field("base_url", &self.base_url)
.field("api_key", &"***REDACTED***")
.field("model", &self.model)
.field("api_protocol", &self.api_protocol)
.field("max_tokens", &self.max_tokens)
.field("temperature", &self.temperature)
.field("context_window", &self.context_window)
.finish()
}
}
impl LlmConfig {
/// Create a new LLM config
pub fn new(base_url: impl Into<String>, api_key: impl Into<String>, model: impl Into<String>) -> Self {


@@ -12,7 +12,7 @@
use std::sync::Arc;
use serde::{Deserialize, Serialize};
use tokio::sync::{RwLock, Mutex, mpsc};
use tokio::sync::{RwLock, Mutex, mpsc, oneshot};
use zclaw_types::{AgentId, Result, ZclawError};
use zclaw_protocols::{A2aEnvelope, A2aMessageType, A2aRecipient, A2aRouter, A2aAgentProfile, A2aCapability};
use zclaw_runtime::{LlmDriver, CompletionRequest};
@@ -199,9 +199,9 @@ pub struct Director {
director_id: AgentId,
/// Optional LLM driver for intelligent scheduling
llm_driver: Option<Arc<dyn LlmDriver>>,
/// Inbox for receiving responses (stores pending request IDs and their response channels)
pending_requests: Arc<Mutex<std::collections::HashMap<String, mpsc::Sender<A2aEnvelope>>>>,
/// Receiver for incoming messages
/// Pending request response channels (request_id → oneshot sender)
pending_requests: Arc<Mutex<std::collections::HashMap<String, oneshot::Sender<A2aEnvelope>>>>,
/// Receiver for incoming messages (consumed by inbox reader task)
inbox: Arc<Mutex<Option<mpsc::Receiver<A2aEnvelope>>>>,
}
@@ -360,7 +360,7 @@ impl Director {
use std::time::{SystemTime, UNIX_EPOCH};
let now = SystemTime::now()
.duration_since(UNIX_EPOCH)
.unwrap()
.expect("system clock is valid")
.as_nanos();
let idx = (now as usize) % agents.len();
Some(agents[idx].clone())
@@ -481,13 +481,16 @@ Respond with ONLY the number (1-{}) of the agent who should speak next. No expla
}
/// Send message to selected agent and wait for response
///
/// Uses oneshot channels to avoid deadlock: each call creates its own
/// response channel, and a shared inbox reader dispatches responses.
pub async fn send_to_agent(
&self,
agent: &DirectorAgent,
message: String,
) -> Result<String> {
// Create a response channel for this request
let (_response_tx, mut _response_rx) = mpsc::channel::<A2aEnvelope>(1);
// Create a oneshot channel for this specific request's response
let (response_tx, response_rx) = oneshot::channel::<A2aEnvelope>();
let envelope = A2aEnvelope::new(
self.director_id.clone(),
@@ -500,50 +503,32 @@ Respond with ONLY the number (1-{}) of the agent who should speak next. No expla
}),
);
// Store the request ID with its response channel
// Store the oneshot sender so the inbox reader can dispatch to it
let request_id = envelope.id.clone();
{
let mut pending = self.pending_requests.lock().await;
pending.insert(request_id.clone(), _response_tx);
pending.insert(request_id.clone(), response_tx);
}
// Send the request
self.router.route(envelope).await?;
// Wait for response with timeout
// Ensure the inbox reader is running
self.ensure_inbox_reader().await;
// Wait for response on our dedicated oneshot channel with timeout
let timeout_duration = std::time::Duration::from_secs(self.config.response_timeout);
let request_id_clone = request_id.clone();
let response = tokio::time::timeout(timeout_duration, async {
// Poll the inbox for responses
let mut inbox_guard = self.inbox.lock().await;
if let Some(ref mut rx) = *inbox_guard {
while let Some(msg) = rx.recv().await {
// Check if this is a response to our request
if msg.message_type == A2aMessageType::Response {
if let Some(ref reply_to) = msg.reply_to {
if reply_to == &request_id_clone {
// Found our response
return Some(msg);
}
}
}
// Not our response, continue waiting
// (In a real implementation, we'd re-queue non-matching messages)
}
}
None
}).await;
let response = tokio::time::timeout(timeout_duration, response_rx).await;
// Clean up pending request
// Clean up pending request (sender already consumed on success)
{
let mut pending = self.pending_requests.lock().await;
pending.remove(&request_id);
}
match response {
Ok(Some(envelope)) => {
// Extract response text from payload
Ok(Ok(envelope)) => {
let response_text = envelope.payload
.get("response")
.and_then(|v: &serde_json::Value| v.as_str())
@@ -551,7 +536,7 @@ Respond with ONLY the number (1-{}) of the agent who should speak next. No expla
.to_string();
Ok(response_text)
}
Ok(None) => {
Ok(Err(_)) => {
Err(ZclawError::Timeout("No response received".into()))
}
Err(_) => {
@@ -563,6 +548,47 @@ Respond with ONLY the number (1-{}) of the agent who should speak next. No expla
}
}
/// Ensure the inbox reader task is running.
/// The inbox reader continuously reads from the shared inbox channel
/// and dispatches each response to the correct oneshot sender.
async fn ensure_inbox_reader(&self) {
// Quick check: if inbox has already been taken, reader is running
{
let inbox = self.inbox.lock().await;
if inbox.is_none() {
return; // Reader already spawned and consumed the receiver
}
}
// Take the receiver out (only once)
let rx = {
let mut inbox = self.inbox.lock().await;
inbox.take()
};
if let Some(mut rx) = rx {
let pending = self.pending_requests.clone();
tokio::spawn(async move {
while let Some(msg) = rx.recv().await {
// Find and dispatch to the correct oneshot sender
if msg.message_type == A2aMessageType::Response {
if let Some(ref reply_to) = msg.reply_to {
let reply_to_clone = reply_to.clone();
let mut pending_guard = pending.lock().await;
if let Some(sender) = pending_guard.remove(reply_to) {
// Send the response; if receiver already dropped, request was cancelled
if sender.send(msg).is_err() {
tracing::debug!("[Director] Response dropped: receiver cancelled for reply_to={}", reply_to_clone);
}
}
}
}
// Non-response messages are dropped (notifications, etc.)
}
});
}
}
/// Broadcast message to all agents
pub async fn broadcast(&self, message: String) -> Result<()> {
let envelope = A2aEnvelope::new(
@@ -616,7 +642,9 @@ Respond with ONLY the number (1-{}) of the agent who should speak next. No expla
}
if let Some(ref user_input) = input {
context.push_str(&format!("User: {}\n\n", user_input));
context.push_str("<user_input>\n");
context.push_str(&format!("{}\n", user_input));
context.push_str("</user_input>\n\n");
}
// Add recent history
@@ -882,7 +910,9 @@ impl Director {
let prompt = format!(
r#"你是 ZCLAW 管家。请将以下用户需求拆解为 1-5 个具体子任务。
用户需求:{}
<user_request>
{}
</user_request>
请按 JSON 数组格式输出,每个元素包含:
- description: 子任务描述(中文)


@@ -17,8 +17,9 @@ impl EventBus {
/// Publish an event
pub fn publish(&self, event: Event) {
// Ignore send errors (no subscribers)
let _ = self.sender.send(event);
if let Err(e) = self.sender.send(event) {
tracing::debug!("Event dropped (no subscribers or channel full): {:?}", e);
}
}
/// Subscribe to events


@@ -14,7 +14,7 @@ use zclaw_types::Result;
/// HTML exporter
pub struct HtmlExporter {
/// Template name (reserved for future template support)
#[allow(dead_code)] // TODO: Implement template-based HTML export
#[allow(dead_code)] // @reserved: post-release template-based HTML export
template: String,
}


@@ -490,7 +490,7 @@ impl PptxExporter {
paths.sort();
for path in paths {
let content = files.get(path).unwrap();
let content = files.get(path).expect("path comes from files.keys(), must exist");
let options = SimpleFileOptions::default()
.compression_method(zip::CompressionMethod::Deflated);


@@ -243,7 +243,7 @@ fn clean_fallback_response(text: &str) -> String {
fn current_timestamp_millis() -> i64 {
std::time::SystemTime::now()
.duration_since(std::time::UNIX_EPOCH)
.unwrap()
.expect("system clock is valid")
.as_millis() as i64
}


@@ -557,7 +557,7 @@ Use Chinese if the topic is in Chinese. Include metaphors that relate to everyda
.join("\n")
}
#[allow(dead_code)]
#[allow(dead_code)] // @reserved: instance-method convenience wrapper for static helper
fn extract_text_from_response(&self, response: &CompletionResponse) -> String {
Self::extract_text_from_response_static(response)
}
@@ -882,7 +882,7 @@ fn current_timestamp() -> i64 {
use std::time::{SystemTime, UNIX_EPOCH};
SystemTime::now()
.duration_since(UNIX_EPOCH)
.unwrap()
.expect("system clock is valid")
.as_millis() as i64
}


@@ -1,16 +1,10 @@
//! A2A (Agent-to-Agent) messaging
//!
//! All items in this module are gated by the `multi-agent` feature flag.
#[cfg(feature = "multi-agent")]
use zclaw_types::{AgentId, Capability, Event, Result};
#[cfg(feature = "multi-agent")]
use zclaw_protocols::{A2aAgentProfile, A2aCapability, A2aEnvelope, A2aMessageType, A2aRecipient};
#[cfg(feature = "multi-agent")]
use super::Kernel;
#[cfg(feature = "multi-agent")]
impl Kernel {
// ============================================================
// A2A (Agent-to-Agent) Messaging


@@ -106,13 +106,11 @@ impl SkillExecutor for KernelSkillExecutor {
/// Inbox wrapper for A2A message receivers that supports re-queuing
/// non-matching messages instead of dropping them.
#[cfg(feature = "multi-agent")]
pub(crate) struct AgentInbox {
pub(crate) rx: tokio::sync::mpsc::Receiver<zclaw_protocols::A2aEnvelope>,
pub(crate) pending: std::collections::VecDeque<zclaw_protocols::A2aEnvelope>,
}
#[cfg(feature = "multi-agent")]
impl AgentInbox {
pub(crate) fn new(rx: tokio::sync::mpsc::Receiver<zclaw_protocols::A2aEnvelope>) -> Self {
Self { rx, pending: std::collections::VecDeque::new() }


@@ -2,11 +2,8 @@
use zclaw_types::{AgentConfig, AgentId, AgentInfo, Event, Result};
#[cfg(feature = "multi-agent")]
use std::sync::Arc;
#[cfg(feature = "multi-agent")]
use tokio::sync::Mutex;
#[cfg(feature = "multi-agent")]
use super::adapters::AgentInbox;
use super::Kernel;
@@ -23,7 +20,6 @@ impl Kernel {
self.memory.save_agent(&config).await?;
// Register with A2A router for multi-agent messaging (before config is moved)
#[cfg(feature = "multi-agent")]
{
let profile = Self::agent_config_to_a2a_profile(&config);
let rx = self.a2a_router.register_agent(profile).await;
@@ -52,7 +48,6 @@ impl Kernel {
self.memory.delete_agent(id).await?;
// Unregister from A2A router
#[cfg(feature = "multi-agent")]
{
self.a2a_router.unregister_agent(id).await;
self.a2a_inboxes.remove(id);


@@ -85,14 +85,14 @@ impl Kernel {
started_at: None,
completed_at: None,
};
let _ = memory.save_hand_run(&run).await.map_err(|e| {
tracing::warn!("[Approval] Failed to save hand run: {}", e);
});
if let Err(e) = memory.save_hand_run(&run).await {
tracing::error!("[Approval] Failed to save hand run: {}", e);
}
run.status = HandRunStatus::Running;
run.started_at = Some(chrono::Utc::now().to_rfc3339());
let _ = memory.update_hand_run(&run).await.map_err(|e| {
tracing::warn!("[Approval] Failed to update hand run (running): {}", e);
});
if let Err(e) = memory.update_hand_run(&run).await {
tracing::error!("[Approval] Failed to update hand run (running): {}", e);
}
// Register cancellation flag
let cancel_flag = Arc::new(std::sync::atomic::AtomicBool::new(false));
@@ -121,9 +121,9 @@ impl Kernel {
}
run.duration_ms = Some(duration.as_millis() as u64);
run.completed_at = Some(completed_at);
let _ = memory.update_hand_run(&run).await.map_err(|e| {
tracing::warn!("[Approval] Failed to update hand run (completed): {}", e);
});
if let Err(e) = memory.update_hand_run(&run).await {
tracing::error!("[Approval] Failed to update hand run (completed): {}", e);
}
// Update approval status based on execution result
let mut approvals = approvals.lock().await;


@@ -25,7 +25,7 @@ impl Kernel {
agent_id: &AgentId,
message: String,
) -> Result<MessageResponse> {
self.send_message_with_chat_mode(agent_id, message, None).await
self.send_message_with_chat_mode(agent_id, message, None, None).await
}
/// Send a message to an agent with optional chat mode configuration
@@ -34,6 +34,7 @@ impl Kernel {
agent_id: &AgentId,
message: String,
chat_mode: Option<ChatModeConfig>,
model_override: Option<String>,
) -> Result<MessageResponse> {
let agent_config = self.registry.get(agent_id)
.ok_or_else(|| zclaw_types::ZclawError::NotFound(format!("Agent not found: {}", agent_id)))?;
@@ -41,12 +42,16 @@ impl Kernel {
// Create or get session
let session_id = self.memory.create_session(agent_id).await?;
// Use agent-level model if configured, otherwise fall back to global config
let model = if !agent_config.model.model.is_empty() {
// Model priority: UI override > Agent config > Global config
let model = model_override
.filter(|m| !m.is_empty())
.unwrap_or_else(|| {
if !agent_config.model.model.is_empty() {
agent_config.model.model.clone()
} else {
self.config.model().to_string()
};
}
});
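The new model-selection logic is a three-level fallback where empty strings count as unset. Distilled into a hypothetical standalone helper:

```rust
/// Model resolution priority: UI override > agent config > global config.
/// Empty strings are treated as "not set", mirroring the kernel's `.filter`.
fn resolve_model(ui_override: Option<&str>, agent_model: &str, global_model: &str) -> String {
    ui_override
        .filter(|m| !m.is_empty())
        .map(str::to_string)
        .unwrap_or_else(|| {
            if !agent_model.is_empty() {
                agent_model.to_string()
            } else {
                global_model.to_string()
            }
        })
}

fn main() {
    assert_eq!(resolve_model(Some("gpt-x"), "agent-m", "global-m"), "gpt-x");
    // An empty override falls through to the agent-level model
    assert_eq!(resolve_model(Some(""), "agent-m", "global-m"), "agent-m");
    assert_eq!(resolve_model(None, "", "global-m"), "global-m");
}
```

The `.filter(|m| !m.is_empty())` step is what prevents a frontend sending `""` from accidentally overriding a configured model.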
// Create agent loop with model configuration
let subagent_enabled = chat_mode.as_ref().and_then(|m| m.subagent_enabled).unwrap_or(false);
@@ -78,10 +83,8 @@ impl Kernel {
loop_runner = loop_runner.with_path_validator(path_validator);
}
// Inject middleware chain if available
if let Some(chain) = self.create_middleware_chain() {
loop_runner = loop_runner.with_middleware_chain(chain);
}
// Inject middleware chain
loop_runner = loop_runner.with_middleware_chain(self.create_middleware_chain());
// Apply chat mode configuration (thinking/reasoning/plan mode)
if let Some(ref mode) = chat_mode {
@@ -122,7 +125,7 @@ impl Kernel {
agent_id: &AgentId,
message: String,
) -> Result<mpsc::Receiver<zclaw_runtime::LoopEvent>> {
self.send_message_stream_with_prompt(agent_id, message, None, None, None).await
self.send_message_stream_with_prompt(agent_id, message, None, None, None, None).await
}
/// Send a message with streaming, optional system prompt, optional session reuse,
@@ -134,6 +137,7 @@ impl Kernel {
system_prompt_override: Option<String>,
session_id_override: Option<zclaw_types::SessionId>,
chat_mode: Option<ChatModeConfig>,
model_override: Option<String>,
) -> Result<mpsc::Receiver<zclaw_runtime::LoopEvent>> {
let agent_config = self.registry.get(agent_id)
.ok_or_else(|| zclaw_types::ZclawError::NotFound(format!("Agent not found: {}", agent_id)))?;
@@ -150,12 +154,16 @@ impl Kernel {
None => self.memory.create_session(agent_id).await?,
};
// Use agent-level model if configured, otherwise fall back to global config
let model = if !agent_config.model.model.is_empty() {
// Model priority: UI override > Agent config > Global config
let model = model_override
.filter(|m| !m.is_empty())
.unwrap_or_else(|| {
if !agent_config.model.model.is_empty() {
agent_config.model.model.clone()
} else {
self.config.model().to_string()
};
}
});
// Create agent loop with model configuration
let subagent_enabled = chat_mode.as_ref().and_then(|m| m.subagent_enabled).unwrap_or(false);
@@ -188,10 +196,8 @@ impl Kernel {
loop_runner = loop_runner.with_path_validator(path_validator);
}
// Inject middleware chain if available
if let Some(chain) = self.create_middleware_chain() {
loop_runner = loop_runner.with_middleware_chain(chain);
}
// Inject middleware chain
loop_runner = loop_runner.with_middleware_chain(self.create_middleware_chain());
// Apply chat mode configuration (thinking/reasoning/plan mode from frontend)
if let Some(ref mode) = chat_mode {


@@ -8,16 +8,13 @@ mod hands;
mod triggers;
mod approvals;
mod orchestration;
#[cfg(feature = "multi-agent")]
mod a2a;
use std::sync::Arc;
use tokio::sync::{broadcast, Mutex};
use zclaw_types::{Event, Result, AgentState};
#[cfg(feature = "multi-agent")]
use zclaw_types::AgentId;
#[cfg(feature = "multi-agent")]
use zclaw_protocols::A2aRouter;
use crate::registry::AgentRegistry;
@@ -27,7 +24,7 @@ use crate::config::KernelConfig;
use zclaw_memory::MemoryStore;
use zclaw_runtime::{LlmDriver, ToolRegistry, tool::SkillExecutor};
use zclaw_skills::SkillRegistry;
use zclaw_hands::{HandRegistry, hands::{BrowserHand, SlideshowHand, SpeechHand, QuizHand, WhiteboardHand, ResearcherHand, CollectorHand, ClipHand, TwitterHand, quiz::LlmQuizGenerator}};
use zclaw_hands::{HandRegistry, hands::{BrowserHand, QuizHand, ResearcherHand, CollectorHand, ClipHand, TwitterHand, ReminderHand, quiz::LlmQuizGenerator}};
pub use adapters::KernelSkillExecutor;
pub use messaging::ChatModeConfig;
@@ -56,11 +53,9 @@ pub struct Kernel {
mcp_adapters: Arc<std::sync::RwLock<Vec<zclaw_protocols::McpToolAdapter>>>,
/// Dynamic industry keyword configs — shared with Tauri frontend, loaded from SaaS
industry_keywords: Arc<tokio::sync::RwLock<Vec<zclaw_runtime::IndustryKeywordConfig>>>,
/// A2A router for inter-agent messaging (gated by multi-agent feature)
#[cfg(feature = "multi-agent")]
/// A2A router for inter-agent messaging
a2a_router: Arc<A2aRouter>,
/// Per-agent A2A inbox receivers (supports re-queuing non-matching messages)
#[cfg(feature = "multi-agent")]
a2a_inboxes: Arc<dashmap::DashMap<AgentId, Arc<Mutex<adapters::AgentInbox>>>>,
}
@@ -93,14 +88,12 @@ impl Kernel {
let quiz_model = config.model().to_string();
let quiz_generator = Arc::new(LlmQuizGenerator::new(driver.clone(), quiz_model));
hands.register(Arc::new(BrowserHand::new())).await;
hands.register(Arc::new(SlideshowHand::new())).await;
hands.register(Arc::new(SpeechHand::new())).await;
hands.register(Arc::new(QuizHand::with_generator(quiz_generator))).await;
hands.register(Arc::new(WhiteboardHand::new())).await;
hands.register(Arc::new(ResearcherHand::new())).await;
hands.register(Arc::new(CollectorHand::new())).await;
hands.register(Arc::new(ClipHand::new())).await;
hands.register(Arc::new(TwitterHand::new())).await;
hands.register(Arc::new(ReminderHand::new())).await;
// Create skill executor
let skill_executor = Arc::new(KernelSkillExecutor::new(skills.clone(), driver.clone()));
@@ -137,7 +130,6 @@ impl Kernel {
}
// Initialize A2A router for multi-agent support
#[cfg(feature = "multi-agent")]
let a2a_router = {
let kernel_agent_id = AgentId::new();
Arc::new(A2aRouter::new(kernel_agent_id))
@@ -161,9 +153,7 @@ impl Kernel {
extraction_driver: None,
mcp_adapters: Arc::new(std::sync::RwLock::new(Vec::new())),
industry_keywords: Arc::new(tokio::sync::RwLock::new(Vec::new())),
#[cfg(feature = "multi-agent")]
a2a_router,
#[cfg(feature = "multi-agent")]
a2a_inboxes: Arc::new(dashmap::DashMap::new()),
})
}
@@ -203,7 +193,7 @@ impl Kernel {
/// Cross-cutting concerns (compaction, loop guard, token calibration, etc.) are
/// delegated to the chain. The chain is always attached; when nothing is
/// registered it is empty and acts as a no-op.
pub(crate) fn create_middleware_chain(&self) -> Option<zclaw_runtime::middleware::MiddlewareChain> {
pub(crate) fn create_middleware_chain(&self) -> zclaw_runtime::middleware::MiddlewareChain {
let mut chain = zclaw_runtime::middleware::MiddlewareChain::new();
// Butler router — semantic skill routing context injection
@@ -249,6 +239,9 @@ impl Kernel {
}
// Data masking middleware — mask sensitive entities before any other processing
// NOTE: Registration order does NOT determine execution order.
// The chain sorts by priority() ascending before execution.
// Execution order: Evolution(78) → ButlerRouter(80) → DataMasking(90) → ...
{
use std::sync::Arc;
let masker = Arc::new(zclaw_runtime::middleware::data_masking::DataMasker::new());
@@ -262,6 +255,13 @@ impl Kernel {
growth = growth.with_llm_driver(driver.clone());
}
// Evolution middleware — pushes evolution candidate skills into system prompt
// priority=78, executed first by chain (before ButlerRouter@80)
let evolution_mw = std::sync::Arc::new(
zclaw_runtime::middleware::evolution::EvolutionMiddleware::new()
);
chain.register(evolution_mw.clone());
// Compaction middleware — only register when threshold > 0
let threshold = self.config.compaction_threshold();
if threshold > 0 {
@@ -279,10 +279,11 @@ impl Kernel {
chain.register(Arc::new(mw));
}
// Memory middleware — auto-extract memories after conversations
// Memory middleware — auto-extract memories + check evolution after conversations
{
use std::sync::Arc;
let mw = zclaw_runtime::middleware::memory::MemoryMiddleware::new(growth);
let mw = zclaw_runtime::middleware::memory::MemoryMiddleware::new(growth)
.with_evolution(evolution_mw);
chain.register(Arc::new(mw));
}
@@ -353,13 +354,19 @@ impl Kernel {
chain.register(Arc::new(mw));
}
// Only return Some if we actually registered middleware
if chain.is_empty() {
None
} else {
tracing::info!("[Kernel] Middleware chain created with {} middlewares", chain.len());
Some(chain)
// Trajectory recorder — record agent loop events for Hermes analysis
{
use std::sync::Arc;
let tstore = zclaw_memory::trajectory_store::TrajectoryStore::new(self.memory.pool());
let mw = zclaw_runtime::middleware::trajectory_recorder::TrajectoryRecorderMiddleware::new(Arc::new(tstore));
chain.register(Arc::new(mw));
}
// Always return the chain (empty chain is a no-op)
if !chain.is_empty() {
tracing::info!("[Kernel] Middleware chain created with {} middlewares", chain.len());
}
chain
}
/// Subscribe to events


@@ -10,7 +10,6 @@ pub mod trigger_manager;
pub mod config;
pub mod scheduler;
pub mod skill_router;
#[cfg(feature = "multi-agent")]
pub mod director;
pub mod generation;
pub mod export;
@@ -21,13 +20,11 @@ pub use capabilities::*;
pub use events::*;
pub use config::*;
pub use trigger_manager::{TriggerManager, TriggerEntry, TriggerUpdateRequest, TriggerManagerConfig};
#[cfg(feature = "multi-agent")]
pub use director::{
Director, DirectorConfig, DirectorBuilder, DirectorAgent,
ConversationState, ScheduleStrategy,
// Note: AgentRole is intentionally NOT re-exported here — use generation::AgentRole instead
};
#[cfg(feature = "multi-agent")]
pub use zclaw_protocols::{
A2aRouter, A2aAgentProfile, A2aCapability, A2aEnvelope, A2aMessageType, A2aRecipient,
A2aReceiver,


@@ -77,7 +77,7 @@ impl SchedulerService {
kernel_lock: &Arc<Mutex<Option<Kernel>>>,
) -> Result<()> {
// Collect due triggers under lock
let to_execute: Vec<(String, String, String)> = {
let to_execute: Vec<(String, String, String, String)> = {
let kernel_guard = kernel_lock.lock().await;
let kernel = match kernel_guard.as_ref() {
Some(k) => k,
@@ -103,7 +103,8 @@ impl SchedulerService {
.filter_map(|t| {
if let zclaw_hands::TriggerType::Schedule { ref cron } = t.config.trigger_type {
if Self::should_fire_cron(cron, &now) {
Some((t.config.id.clone(), t.config.hand_id.clone(), cron.clone()))
// (trigger_id, hand_id, cron_expr, trigger_name)
Some((t.config.id.clone(), t.config.hand_id.clone(), cron.clone(), t.config.name.clone()))
} else {
None
}
@@ -123,7 +124,7 @@ impl SchedulerService {
// If parallel execution is needed, spawn each execute_hand in a separate task
// and collect results via JoinSet.
let now = chrono::Utc::now();
for (trigger_id, hand_id, cron_expr) in to_execute {
for (trigger_id, hand_id, cron_expr, trigger_name) in to_execute {
tracing::info!(
"[Scheduler] Firing scheduled trigger '{}' → hand '{}' (cron: {})",
trigger_id, hand_id, cron_expr
@@ -138,6 +139,7 @@ impl SchedulerService {
let input = serde_json::json!({
"trigger_id": trigger_id,
"trigger_type": "schedule",
"task_description": trigger_name,
"cron": cron_expr,
"fired_at": now.to_rfc3339(),
});


@@ -134,7 +134,9 @@ impl TriggerManager {
/// Create a new trigger
pub async fn create_trigger(&self, config: TriggerConfig) -> Result<TriggerEntry> {
// Validate hand exists (outside of our lock to avoid holding two locks)
if self.hand_registry.get(&config.hand_id).await.is_none() {
// System hands (prefixed with '_') are exempt from validation — they are
// registered at boot but may not appear in the hand registry scan path.
if !config.hand_id.starts_with('_') && self.hand_registry.get(&config.hand_id).await.is_none() {
return Err(zclaw_types::ZclawError::InvalidInput(
format!("Hand '{}' not found", config.hand_id)
));
@@ -170,7 +172,7 @@ impl TriggerManager {
) -> Result<TriggerEntry> {
// Validate hand exists if being updated (outside of our lock)
if let Some(hand_id) = &updates.hand_id {
if self.hand_registry.get(hand_id).await.is_none() {
if !hand_id.starts_with('_') && self.hand_registry.get(hand_id).await.is_none() {
return Err(zclaw_types::ZclawError::InvalidInput(
format!("Hand '{}' not found", hand_id)
));
@@ -303,9 +305,10 @@ impl TriggerManager {
};
// Get hand (outside of our lock to avoid potential deadlock with hand_registry)
// System hands (prefixed with '_') must be registered at boot — same rule as create_trigger.
let hand = self.hand_registry.get(&hand_id).await
.ok_or_else(|| zclaw_types::ZclawError::InvalidInput(
format!("Hand '{}' not found", hand_id)
format!("Hand '{}' not found (system hands must be registered at boot)", hand_id)
))?;
// Update state before execution
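The validation rule introduced in this file reduces to: skip the registry lookup for `_`-prefixed system hands, fail for any other unregistered id. A minimal sketch (the `registered` flag stands in for the async `hand_registry.get()` call):

```rust
/// TriggerManager's hand validation: ids prefixed with '_' are system hands,
/// registered at boot and exempt from the registry lookup.
fn validate_hand(hand_id: &str, registered: bool) -> Result<(), String> {
    if !hand_id.starts_with('_') && !registered {
        return Err(format!("Hand '{}' not found", hand_id));
    }
    Ok(())
}

fn main() {
    assert!(validate_hand("_reminder", false).is_ok()); // system hand: lookup skipped
    assert!(validate_hand("browser", true).is_ok());    // normal hand, registered
    assert!(validate_hand("missing", false).is_err());  // normal hand, unregistered
}
```

Note the exemption applies only to create/update validation; at execution time even a `_`-prefixed hand must actually be in the registry, as the sharpened error message states.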


@@ -21,6 +21,14 @@ impl MemoryStore {
Ok(store)
}
/// Get a clone of the underlying SQLite pool.
///
/// Used by subsystems (e.g. `TrajectoryStore`) that need to share the
/// same database connection pool for their own tables.
pub fn pool(&self) -> SqlitePool {
self.pool.clone()
}
/// Ensure the parent directory for the database file exists
fn ensure_database_dir(database_url: &str) -> Result<()> {
// Parse SQLite URL to extract file path


@@ -25,7 +25,6 @@ reqwest = { workspace = true }
# Internal crates
zclaw-types = { workspace = true }
zclaw-runtime = { workspace = true }
zclaw-kernel = { workspace = true }
zclaw-skills = { workspace = true }
zclaw-hands = { workspace = true }


@@ -589,7 +589,7 @@ impl StageEngine {
}
/// Clone with drivers (reserved for future use)
#[allow(dead_code)]
#[allow(dead_code)] // @reserved: post-release stage cloning with drivers
fn clone_with_drivers(&self) -> Self {
Self {
llm_driver: self.llm_driver.clone(),


@@ -40,6 +40,15 @@ pub enum ExecuteError {
Io(#[from] std::io::Error),
}
/// Maximum completed/failed/cancelled runs to keep in memory
const MAX_COMPLETED_RUNS: usize = 100;
/// Maximum allowed delay in milliseconds (60 seconds)
const MAX_DELAY_MS: u64 = 60_000;
/// Default per-step timeout (5 minutes)
const DEFAULT_STEP_TIMEOUT_SECS: u64 = 300;
/// Pipeline executor
pub struct PipelineExecutor {
/// Action registry
@@ -107,10 +116,18 @@ impl PipelineExecutor {
// Create execution context
let mut context = ExecutionContext::new(inputs);
// Determine per-step timeout from pipeline spec (0 means use default)
let step_timeout = if pipeline.spec.timeout_secs > 0 {
pipeline.spec.timeout_secs
} else {
DEFAULT_STEP_TIMEOUT_SECS
};
// Execute steps
let result = self.execute_steps(pipeline, &mut context, &run_id).await;
let result = self.execute_steps(pipeline, &mut context, &run_id, step_timeout).await;
// Update run state
let return_value = {
let mut runs = self.runs.write().await;
if let Some(run) = runs.get_mut(&run_id) {
match result {
@@ -124,18 +141,25 @@ impl PipelineExecutor {
}
}
run.ended_at = Some(Utc::now());
return Ok(run.clone());
}
Ok(run.clone())
} else {
Err(ExecuteError::Action("执行后未找到运行记录".to_string()))
}
};
/// Execute pipeline steps
// Auto-cleanup old completed runs (after releasing the write lock)
self.cleanup().await;
return_value
}
/// Execute pipeline steps with per-step timeout
async fn execute_steps(
&self,
pipeline: &Pipeline,
context: &mut ExecutionContext,
run_id: &str,
step_timeout_secs: u64,
) -> Result<HashMap<String, Value>, ExecuteError> {
let total_steps = pipeline.spec.steps.len();
@@ -161,8 +185,15 @@ impl PipelineExecutor {
tracing::info!("Executing step {} ({}/{})", step.id, idx + 1, total_steps);
// Execute action
let result = self.execute_action(&step.action, context).await?;
// Execute action with per-step timeout
let timeout_duration = std::time::Duration::from_secs(step_timeout_secs);
let result = tokio::time::timeout(
timeout_duration,
self.execute_action(&step.action, context),
).await.map_err(|_| {
tracing::error!("Step {} timed out after {}s", step.id, step_timeout_secs);
ExecuteError::Timeout
})??;
// Store result
context.set_output(&step.id, result.clone());
@@ -336,7 +367,16 @@ impl PipelineExecutor {
}
Action::Delay { ms } => {
tokio::time::sleep(tokio::time::Duration::from_millis(*ms)).await;
let capped_ms = if *ms > MAX_DELAY_MS {
tracing::warn!(
"Delay ms {} exceeds max {}, capping to {}",
ms, MAX_DELAY_MS, MAX_DELAY_MS
);
MAX_DELAY_MS
} else {
*ms
};
tokio::time::sleep(tokio::time::Duration::from_millis(capped_ms)).await;
Ok(Value::Null)
}
@@ -508,6 +548,33 @@ impl PipelineExecutor {
pub async fn list_runs(&self) -> Vec<PipelineRun> {
self.runs.read().await.values().cloned().collect()
}
/// Clean up old completed/failed/cancelled runs to prevent memory leaks.
/// Keeps at most MAX_COMPLETED_RUNS finished runs, evicting the oldest first.
pub async fn cleanup(&self) {
let mut runs = self.runs.write().await;
// Collect IDs of finished runs (completed, failed, cancelled)
let mut finished: Vec<(String, chrono::DateTime<Utc>)> = runs
.iter()
.filter(|(_, r)| matches!(r.status, RunStatus::Completed | RunStatus::Failed | RunStatus::Cancelled))
.map(|(id, r)| (id.clone(), r.ended_at.unwrap_or(r.started_at)))
.collect();
let to_remove = finished.len().saturating_sub(MAX_COMPLETED_RUNS);
if to_remove > 0 {
// Sort by end time ascending (oldest first)
finished.sort_by_key(|(_, t)| *t);
for (id, _) in finished.into_iter().take(to_remove) {
runs.remove(&id);
// Also clean up cancellation flag
drop(runs);
self.cancellations.write().await.remove(&id);
runs = self.runs.write().await;
}
tracing::debug!("Cleaned up {} old pipeline runs", to_remove);
}
}
}
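Two of the new pipeline guards are pure input clamps and are easy to isolate. A sketch mirroring the `MAX_DELAY_MS` cap and the zero-means-default timeout selection (constants copied from the diff):

```rust
/// Constants from the diff.
const MAX_DELAY_MS: u64 = 60_000;           // cap for Action::Delay (60 s)
const DEFAULT_STEP_TIMEOUT_SECS: u64 = 300; // per-step timeout default (5 min)

/// Clamp a requested delay, as Action::Delay now does (real code warns first).
fn cap_delay(ms: u64) -> u64 {
    if ms > MAX_DELAY_MS { MAX_DELAY_MS } else { ms }
}

/// Per-step timeout: a pipeline spec value of 0 means "use the default".
fn effective_timeout(spec_secs: u64) -> u64 {
    if spec_secs > 0 { spec_secs } else { DEFAULT_STEP_TIMEOUT_SECS }
}

fn main() {
    assert_eq!(cap_delay(500), 500);
    assert_eq!(cap_delay(120_000), 60_000);
    assert_eq!(effective_timeout(0), 300);
    assert_eq!(effective_timeout(30), 30);
}
```

Both guards bound untrusted pipeline specs: a hostile `Delay` can no longer park the executor for hours, and a step without a timeout no longer hangs a run indefinitely.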
#[cfg(test)]


@@ -48,7 +48,7 @@ impl ExecutionContext {
steps_output: HashMap::new(),
variables: HashMap::new(),
loop_context: None,
expr_regex: Regex::new(r"\$\{([^}]+)\}").unwrap(),
expr_regex: Regex::new(r"\$\{([^}]+)\}").expect("static regex is valid"),
}
}
@@ -73,7 +73,7 @@ impl ExecutionContext {
steps_output,
variables,
loop_context: None,
expr_regex: Regex::new(r"\$\{([^}]+)\}").unwrap(),
expr_regex: Regex::new(r"\$\{([^}]+)\}").expect("static regex is valid"),
}
}
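The `\$\{([^}]+)\}` expression regex drives `${name}` substitution in step inputs. A regex-free sketch of the same semantics (hypothetical `expand` helper; unknown and unterminated placeholders are left intact):

```rust
use std::collections::HashMap;

/// Expand `${name}` placeholders, mirroring what the executor's
/// `\$\{([^}]+)\}` regex matches. Unknown names are left as-is.
fn expand(template: &str, vars: &HashMap<&str, &str>) -> String {
    let mut out = String::new();
    let mut rest = template;
    while let Some(start) = rest.find("${") {
        out.push_str(&rest[..start]);
        match rest[start + 2..].find('}') {
            Some(end) => {
                let name = &rest[start + 2..start + 2 + end];
                match vars.get(name) {
                    Some(v) => out.push_str(v),
                    None => {
                        // No binding: keep the placeholder verbatim
                        out.push_str("${");
                        out.push_str(name);
                        out.push('}');
                    }
                }
                rest = &rest[start + 2 + end + 1..];
            }
            None => {
                // Unterminated "${" — emit the remainder as-is
                out.push_str(&rest[start..]);
                rest = "";
            }
        }
    }
    out.push_str(rest);
    out
}

fn main() {
    let vars = HashMap::from([("name", "zclaw"), ("n", "3")]);
    assert_eq!(expand("hi ${name}, ${n} steps", &vars), "hi zclaw, 3 steps");
    assert_eq!(expand("keep ${missing}", &vars), "keep ${missing}");
}
```

Compiling the regex once per context (with `.expect` on a static pattern, as the diff now does) is safe precisely because the pattern is a hard-coded literal that can never fail to parse.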


@@ -1,20 +1,15 @@
//! ZCLAW Protocols
//!
//! Protocol support for MCP (Model Context Protocol) and A2A (Agent-to-Agent).
//!
//! A2A is gated behind the `a2a` feature flag (reserved for future multi-agent scenarios).
//! MCP is always available as a framework for tool integration.
mod mcp;
mod mcp_types;
mod mcp_tool_adapter;
mod mcp_transport;
#[cfg(feature = "a2a")]
mod a2a;
pub use mcp::*;
pub use mcp_types::*;
pub use mcp_tool_adapter::*;
pub use mcp_transport::*;
#[cfg(feature = "a2a")]
pub use a2a::*;


@@ -130,7 +130,7 @@ impl McpToolAdapter {
match result.len() {
0 => Ok(Value::Null),
1 => Ok(result.into_iter().next().unwrap()),
1 => Ok(result.into_iter().next().unwrap_or(Value::Null)),
_ => Ok(Value::Array(result)),
}
}
@@ -160,7 +160,7 @@ impl McpServiceManager {
let adapters = McpToolAdapter::from_server(name.clone(), client.clone()).await?;
self.clients.insert(name.clone(), client);
self.adapters.insert(name.clone(), adapters);
Ok(self.adapters.get(&name).unwrap().iter().collect())
Ok(self.adapters.get(&name).map(|v| v.iter().collect()).unwrap_or_default())
}
/// Get all registered tool adapters from all services


@@ -84,12 +84,20 @@ impl McpServerConfig {
}
}
/// Combined transport handles (stdin + stdout) behind a single Mutex.
/// This ensures write-then-read is atomic, preventing concurrent requests
/// from receiving each other's responses.
struct TransportHandles {
stdin: BufWriter<ChildStdin>,
stdout: BufReader<ChildStdout>,
}
/// MCP Transport using stdio
pub struct McpTransport {
config: McpServerConfig,
child: Arc<Mutex<Option<Child>>>,
stdin: Arc<Mutex<Option<BufWriter<ChildStdin>>>>,
stdout: Arc<Mutex<Option<BufReader<ChildStdout>>>>,
/// Single Mutex protecting both stdin and stdout for atomic write-then-read
handles: Arc<Mutex<Option<TransportHandles>>>,
capabilities: Arc<Mutex<Option<ServerCapabilities>>>,
}
@@ -99,8 +107,7 @@ impl McpTransport {
Self {
config,
child: Arc::new(Mutex::new(None)),
stdin: Arc::new(Mutex::new(None)),
stdout: Arc::new(Mutex::new(None)),
handles: Arc::new(Mutex::new(None)),
capabilities: Arc::new(Mutex::new(None)),
}
}
@@ -162,9 +169,11 @@ impl McpTransport {
});
}
// Store handles in separate mutexes
*self.stdin.lock().await = Some(BufWriter::new(stdin));
*self.stdout.lock().await = Some(BufReader::new(stdout));
// Store handles in single mutex for atomic write-then-read
*self.handles.lock().await = Some(TransportHandles {
stdin: BufWriter::new(stdin),
stdout: BufReader::new(stdout),
});
*child_guard = Some(child);
Ok(())
@@ -201,21 +210,21 @@ impl McpTransport {
let line = serde_json::to_string(notification)
.map_err(|e| ZclawError::McpError(format!("Failed to serialize notification: {}", e)))?;
let mut stdin_guard = self.stdin.lock().await;
let stdin = stdin_guard.as_mut()
let mut handles_guard = self.handles.lock().await;
let handles = handles_guard.as_mut()
.ok_or_else(|| ZclawError::McpError("Transport not started".to_string()))?;
stdin.write_all(line.as_bytes())
handles.stdin.write_all(line.as_bytes())
.map_err(|e| ZclawError::McpError(format!("Failed to write notification: {}", e)))?;
stdin.write_all(b"\n")
handles.stdin.write_all(b"\n")
.map_err(|e| ZclawError::McpError(format!("Failed to write newline: {}", e)))?;
stdin.flush()
handles.stdin.flush()
.map_err(|e| ZclawError::McpError(format!("Failed to flush notification: {}", e)))?;
Ok(())
}
/// Send JSON-RPC request
/// Send JSON-RPC request (atomic write-then-read under single lock)
async fn send_request<T: DeserializeOwned>(
&self,
method: &str,
@@ -234,28 +243,23 @@ impl McpTransport {
let line = serde_json::to_string(&request)
.map_err(|e| ZclawError::McpError(format!("Failed to serialize request: {}", e)))?;
// Write to stdin
{
let mut stdin_guard = self.stdin.lock().await;
let stdin = stdin_guard.as_mut()
.ok_or_else(|| ZclawError::McpError("Transport not started".to_string()))?;
stdin.write_all(line.as_bytes())
.map_err(|e| ZclawError::McpError(format!("Failed to write request: {}", e)))?;
stdin.write_all(b"\n")
.map_err(|e| ZclawError::McpError(format!("Failed to write newline: {}", e)))?;
stdin.flush()
.map_err(|e| ZclawError::McpError(format!("Failed to flush request: {}", e)))?;
}
// Read from stdout
// Atomic write-then-read under single lock
let response_line = {
let mut stdout_guard = self.stdout.lock().await;
let stdout = stdout_guard.as_mut()
let mut handles_guard = self.handles.lock().await;
let handles = handles_guard.as_mut()
.ok_or_else(|| ZclawError::McpError("Transport not started".to_string()))?;
// Write to stdin
handles.stdin.write_all(line.as_bytes())
.map_err(|e| ZclawError::McpError(format!("Failed to write request: {}", e)))?;
handles.stdin.write_all(b"\n")
.map_err(|e| ZclawError::McpError(format!("Failed to write newline: {}", e)))?;
handles.stdin.flush()
.map_err(|e| ZclawError::McpError(format!("Failed to flush request: {}", e)))?;
// Read from stdout (still holding the lock — no interleaving possible)
let mut response_line = String::new();
stdout.read_line(&mut response_line)
handles.stdout.read_line(&mut response_line)
.map_err(|e| ZclawError::McpError(format!("Failed to read response: {}", e)))?;
response_line
};
@@ -429,7 +433,7 @@ impl Drop for McpTransport {
let _ = child.wait();
}
Err(e) => {
eprintln!("[McpTransport] Failed to kill child process: {}", e);
tracing::warn!("[McpTransport] Failed to kill child process (potential zombie): {}", e);
}
}
}
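The fix above collapses two mutexes into one so that writing a request and reading its response happen inside a single critical section. A synchronous sketch of the invariant (in-memory buffers stand in for the child process's stdin/stdout):

```rust
use std::io::{BufRead, BufReader, Cursor, Write};
use std::sync::Mutex;

/// Both halves behind ONE Mutex: a request's write and the read of its
/// response form one critical section, so two concurrent requests can
/// never receive each other's responses (the flaw of the two-mutex design).
struct Handles {
    stdin: Vec<u8>,                     // stand-in for BufWriter<ChildStdin>
    stdout: BufReader<Cursor<Vec<u8>>>, // stand-in for BufReader<ChildStdout>
}

fn request(handles: &Mutex<Handles>, line: &str) -> String {
    let mut h = handles.lock().expect("transport lock poisoned");
    h.stdin.write_all(line.as_bytes()).unwrap();
    h.stdin.write_all(b"\n").unwrap();
    // Read while still holding the lock — no interleaving possible.
    let mut response = String::new();
    h.stdout.read_line(&mut response).unwrap();
    response.trim_end().to_string()
}

fn main() {
    let handles = Mutex::new(Handles {
        stdin: Vec::new(),
        stdout: BufReader::new(Cursor::new(b"pong\n".to_vec())),
    });
    assert_eq!(request(&handles, "ping"), "pong");
    assert_eq!(handles.lock().unwrap().stdin, b"ping\n");
}
```

The trade-off is serialized requests: the transport handles one JSON-RPC call at a time, which matches stdio MCP servers that answer on a single ordered stream.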


@@ -0,0 +1,55 @@
//! Tests for MCP Transport configuration (McpServerConfig)
//!
//! These tests cover McpServerConfig builder methods without spawning processes.
use std::collections::HashMap;
use zclaw_protocols::McpServerConfig;
#[test]
fn npx_config_creates_correct_command() {
let config = McpServerConfig::npx("@modelcontextprotocol/server-memory");
assert_eq!(config.command, "npx");
assert_eq!(config.args, vec!["-y", "@modelcontextprotocol/server-memory"]);
assert!(config.env.is_empty());
assert!(config.cwd.is_none());
}
#[test]
fn node_config_creates_correct_command() {
let config = McpServerConfig::node("/path/to/server.js");
assert_eq!(config.command, "node");
assert_eq!(config.args, vec!["/path/to/server.js"]);
}
#[test]
fn python_config_creates_correct_command() {
let config = McpServerConfig::python("mcp_server.py");
assert_eq!(config.command, "python");
assert_eq!(config.args, vec!["mcp_server.py"]);
}
#[test]
fn env_adds_variables() {
let config = McpServerConfig::node("server.js")
.env("API_KEY", "secret123")
.env("DEBUG", "true");
assert_eq!(config.env.get("API_KEY").unwrap(), "secret123");
assert_eq!(config.env.get("DEBUG").unwrap(), "true");
}
#[test]
fn cwd_sets_working_directory() {
let config = McpServerConfig::node("server.js").cwd("/tmp/work");
assert_eq!(config.cwd.unwrap(), "/tmp/work");
}
#[test]
fn combined_builder_pattern() {
let config = McpServerConfig::npx("@scope/server")
.env("PORT", "3000")
.cwd("/app");
assert_eq!(config.command, "npx");
assert_eq!(config.args.len(), 2);
assert_eq!(config.env.len(), 1);
assert_eq!(config.cwd.unwrap(), "/app");
}


@@ -0,0 +1,186 @@
//! Tests for MCP domain types (mcp.rs) — McpTool, McpContent, McpResource, etc.
use std::collections::HashMap;
use zclaw_protocols::*;
// === McpTool ===
#[test]
fn mcp_tool_roundtrip() {
let tool = McpTool {
name: "search".to_string(),
description: "Search documents".to_string(),
input_schema: serde_json::json!({"type": "object", "properties": {"query": {"type": "string"}}}),
};
let json = serde_json::to_string(&tool).unwrap();
let parsed: McpTool = serde_json::from_str(&json).unwrap();
assert_eq!(parsed.name, "search");
assert_eq!(parsed.description, "Search documents");
}
#[test]
fn mcp_tool_empty_description() {
let tool = McpTool {
name: "ping".to_string(),
description: String::new(),
input_schema: serde_json::json!({}),
};
let parsed: McpTool = serde_json::from_str(&serde_json::to_string(&tool).unwrap()).unwrap();
assert!(parsed.description.is_empty());
}
// === McpContent ===
#[test]
fn mcp_content_text_roundtrip() {
let content = McpContent::Text { text: "hello".to_string() };
let json = serde_json::to_string(&content).unwrap();
let parsed: McpContent = serde_json::from_str(&json).unwrap();
match parsed {
McpContent::Text { text } => assert_eq!(text, "hello"),
_ => panic!("Expected Text"),
}
}
#[test]
fn mcp_content_image_roundtrip() {
let content = McpContent::Image {
data: "base64==".to_string(),
mime_type: "image/png".to_string(),
};
let json = serde_json::to_string(&content).unwrap();
let parsed: McpContent = serde_json::from_str(&json).unwrap();
match parsed {
McpContent::Image { data, mime_type } => {
assert_eq!(data, "base64==");
assert_eq!(mime_type, "image/png");
}
_ => panic!("Expected Image"),
}
}
#[test]
fn mcp_content_resource_roundtrip() {
let content = McpContent::Resource {
resource: McpResourceContent {
uri: "file:///test.txt".to_string(),
mime_type: Some("text/plain".to_string()),
text: Some("content".to_string()),
blob: None,
},
};
let json = serde_json::to_string(&content).unwrap();
let parsed: McpContent = serde_json::from_str(&json).unwrap();
match parsed {
McpContent::Resource { resource } => {
assert_eq!(resource.uri, "file:///test.txt");
assert_eq!(resource.text.unwrap(), "content");
}
_ => panic!("Expected Resource"),
}
}
// === McpToolCallRequest ===
#[test]
fn mcp_tool_call_request_serialization() {
let mut args = HashMap::new();
args.insert("query".to_string(), serde_json::json!("test"));
let req = McpToolCallRequest {
name: "search".to_string(),
arguments: args,
};
let json = serde_json::to_string(&req).unwrap();
assert!(json.contains("\"name\":\"search\""));
assert!(json.contains("\"query\":\"test\""));
}
// === McpToolCallResponse ===
#[test]
fn mcp_tool_call_response_parse_success() {
let json = r#"{"content":[{"type":"text","text":"found 3 results"}],"is_error":false}"#;
let resp: McpToolCallResponse = serde_json::from_str(json).unwrap();
assert!(!resp.is_error);
assert_eq!(resp.content.len(), 1);
}
#[test]
fn mcp_tool_call_response_parse_error() {
let json = r#"{"content":[{"type":"text","text":"tool not found"}],"is_error":true}"#;
let resp: McpToolCallResponse = serde_json::from_str(json).unwrap();
assert!(resp.is_error);
}
// === McpResource ===
#[test]
fn mcp_resource_roundtrip() {
let res = McpResource {
uri: "file:///doc.md".to_string(),
name: "Documentation".to_string(),
description: Some("Project docs".to_string()),
mime_type: Some("text/markdown".to_string()),
};
let json = serde_json::to_string(&res).unwrap();
let parsed: McpResource = serde_json::from_str(&json).unwrap();
assert_eq!(parsed.uri, "file:///doc.md");
assert_eq!(parsed.description.unwrap(), "Project docs");
}
// === McpPrompt ===
#[test]
fn mcp_prompt_roundtrip() {
let prompt = McpPrompt {
name: "summarize".to_string(),
description: "Summarize text".to_string(),
arguments: vec![
McpPromptArgument {
name: "length".to_string(),
description: "Target length".to_string(),
required: false,
},
],
};
let json = serde_json::to_string(&prompt).unwrap();
let parsed: McpPrompt = serde_json::from_str(&json).unwrap();
assert_eq!(parsed.arguments.len(), 1);
assert!(!parsed.arguments[0].required);
}
// === McpServerInfo ===
#[test]
fn mcp_server_info_roundtrip() {
let info = McpServerInfo {
name: "test-mcp".to_string(),
version: "2.0.0".to_string(),
protocol_version: "2024-11-05".to_string(),
};
let json = serde_json::to_string(&info).unwrap();
let parsed: McpServerInfo = serde_json::from_str(&json).unwrap();
assert_eq!(parsed.name, "test-mcp");
assert_eq!(parsed.protocol_version, "2024-11-05");
}
// === McpCapabilities ===
#[test]
fn mcp_capabilities_default_empty() {
let caps = McpCapabilities::default();
assert!(caps.tools.is_none());
assert!(caps.resources.is_none());
assert!(caps.prompts.is_none());
}
#[test]
fn mcp_capabilities_with_tools() {
let caps = McpCapabilities {
tools: Some(McpToolCapabilities { list_changed: true }),
resources: None,
prompts: None,
};
let json = serde_json::to_string(&caps).unwrap();
assert!(json.contains("\"list_changed\":true"));
}

View File

@@ -0,0 +1,267 @@
//! Tests for MCP JSON-RPC types (mcp_types.rs)
//!
//! Covers: serialization, deserialization, builder patterns, edge cases.
use serde_json;
use zclaw_protocols::*;
// === JsonRpcRequest ===
#[test]
fn jsonrpc_request_new_has_correct_defaults() {
let req = JsonRpcRequest::new(42, "tools/list");
assert_eq!(req.jsonrpc, "2.0");
assert_eq!(req.id, 42);
assert_eq!(req.method, "tools/list");
assert!(req.params.is_none());
}
#[test]
fn jsonrpc_request_with_params() {
let req = JsonRpcRequest::new(1, "tools/call")
.with_params(serde_json::json!({"name": "search"}));
let serialized = serde_json::to_string(&req).unwrap();
assert!(serialized.contains("\"params\""));
assert!(serialized.contains("\"name\":\"search\""));
}
#[test]
fn jsonrpc_request_skip_null_params() {
let req = JsonRpcRequest::new(1, "ping");
let serialized = serde_json::to_string(&req).unwrap();
// params is None, should be skipped
assert!(!serialized.contains("\"params\""));
}
// === JsonRpcResponse ===
#[test]
fn jsonrpc_response_parse_success() {
let json = r#"{"jsonrpc":"2.0","id":1,"result":{"tools":[]}}"#;
let resp: JsonRpcResponse = serde_json::from_str(json).unwrap();
assert_eq!(resp.id, 1);
assert!(resp.result.is_some());
assert!(resp.error.is_none());
}
#[test]
fn jsonrpc_response_parse_error() {
let json = r#"{"jsonrpc":"2.0","id":2,"error":{"code":-32600,"message":"Invalid Request"}}"#;
let resp: JsonRpcResponse = serde_json::from_str(json).unwrap();
assert_eq!(resp.id, 2);
assert!(resp.result.is_none());
let err = resp.error.unwrap();
assert_eq!(err.code, -32600);
assert_eq!(err.message, "Invalid Request");
}
#[test]
fn jsonrpc_response_parse_error_with_data() {
let json = r#"{"jsonrpc":"2.0","id":3,"error":{"code":-32602,"message":"Bad params","data":{"field":"uri"}}}"#;
let resp: JsonRpcResponse = serde_json::from_str(json).unwrap();
let err = resp.error.unwrap();
assert!(err.data.is_some());
assert_eq!(err.data.unwrap()["field"], "uri");
}
// === InitializeRequest ===
#[test]
fn initialize_request_default() {
let req = InitializeRequest::default();
assert_eq!(req.protocol_version, "2024-11-05");
assert_eq!(req.client_info.name, "zclaw");
assert!(!req.client_info.version.is_empty());
}
#[test]
fn initialize_request_serializes() {
let req = InitializeRequest::default();
let json = serde_json::to_string(&req).unwrap();
assert!(json.contains("\"protocol_version\":\"2024-11-05\""));
assert!(json.contains("\"client_info\""));
}
// === ServerCapabilities ===
#[test]
fn server_capabilities_empty() {
let json = r#"{"protocol_version":"2024-11-05","capabilities":{},"server_info":{"name":"test","version":"1.0"}}"#;
let result: InitializeResult = serde_json::from_str(json).unwrap();
assert!(result.capabilities.tools.is_none());
assert!(result.capabilities.resources.is_none());
}
#[test]
fn server_capabilities_with_tools() {
let json = r#"{"protocol_version":"2024-11-05","capabilities":{"tools":{"list_changed":true}},"server_info":{"name":"test","version":"1.0"}}"#;
let result: InitializeResult = serde_json::from_str(json).unwrap();
let tools = result.capabilities.tools.unwrap();
assert!(tools.list_changed);
}
// === ContentBlock ===
#[test]
fn content_block_text() {
let json = r#"{"type":"text","text":"hello world"}"#;
let block: ContentBlock = serde_json::from_str(json).unwrap();
match block {
ContentBlock::Text { text } => assert_eq!(text, "hello world"),
_ => panic!("Expected Text variant"),
}
}
#[test]
fn content_block_image() {
let json = r#"{"type":"image","data":"base64data","mime_type":"image/png"}"#;
let block: ContentBlock = serde_json::from_str(json).unwrap();
match block {
ContentBlock::Image { data, mime_type } => {
assert_eq!(data, "base64data");
assert_eq!(mime_type, "image/png");
}
_ => panic!("Expected Image variant"),
}
}
#[test]
fn content_block_resource() {
let json = r#"{"type":"resource","resource":{"uri":"file:///test.txt","text":"content"}}"#;
let block: ContentBlock = serde_json::from_str(json).unwrap();
match block {
ContentBlock::Resource { resource } => {
assert_eq!(resource.uri, "file:///test.txt");
assert_eq!(resource.text.unwrap(), "content");
}
_ => panic!("Expected Resource variant"),
}
}
// === CallToolResult ===
#[test]
fn call_tool_result_parse() {
let json = r#"{"content":[{"type":"text","text":"result"}],"is_error":false}"#;
let result: CallToolResult = serde_json::from_str(json).unwrap();
assert!(!result.is_error);
assert_eq!(result.content.len(), 1);
}
#[test]
fn call_tool_result_error() {
let json = r#"{"content":[{"type":"text","text":"something went wrong"}],"is_error":true}"#;
let result: CallToolResult = serde_json::from_str(json).unwrap();
assert!(result.is_error);
}
// === ListToolsResult ===
#[test]
fn list_tools_result_with_cursor() {
let json = r#"{"tools":[{"name":"search","input_schema":{"type":"object"}}],"next_cursor":"abc123"}"#;
let result: ListToolsResult = serde_json::from_str(json).unwrap();
assert_eq!(result.tools.len(), 1);
assert_eq!(result.tools[0].name, "search");
assert_eq!(result.next_cursor.unwrap(), "abc123");
}
#[test]
fn list_tools_result_without_cursor() {
let json = r#"{"tools":[]}"#;
let result: ListToolsResult = serde_json::from_str(json).unwrap();
assert!(result.tools.is_empty());
assert!(result.next_cursor.is_none());
}
// === Resource types ===
#[test]
fn resource_parse_with_optional_fields() {
let json = r#"{"uri":"file:///doc.txt","name":"doc","description":"A doc","mime_type":"text/plain"}"#;
let res: Resource = serde_json::from_str(json).unwrap();
assert_eq!(res.uri, "file:///doc.txt");
assert_eq!(res.name, "doc");
assert_eq!(res.description.unwrap(), "A doc");
assert_eq!(res.mime_type.unwrap(), "text/plain");
}
#[test]
fn resource_parse_minimal() {
let json = r#"{"uri":"file:///x","name":"x"}"#;
let res: Resource = serde_json::from_str(json).unwrap();
assert!(res.description.is_none());
assert!(res.mime_type.is_none());
}
// === LoggingLevel ===
#[test]
fn logging_level_serialize_roundtrip() {
let levels = vec![
LoggingLevel::Debug,
LoggingLevel::Info,
LoggingLevel::Warning,
LoggingLevel::Error,
LoggingLevel::Critical,
LoggingLevel::Emergency,
];
for level in levels {
let json = serde_json::to_string(&level).unwrap();
let parsed: LoggingLevel = serde_json::from_str(&json).unwrap();
assert_eq!(std::mem::discriminant(&level), std::mem::discriminant(&parsed));
}
}
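The roundtrip test above compares variants with `std::mem::discriminant`, which identifies which enum variant a value is while ignoring any payload. A stdlib-only sketch (the `Level` enum is hypothetical, not a type from this crate):

```rust
use std::mem::discriminant;

// Hypothetical enum mixing a fieldless and a payload-carrying variant.
enum Level {
    Info,
    Error { code: i32 },
}

fn main() {
    // Same variant, different payloads: discriminants still compare equal.
    let a = Level::Error { code: 1 };
    let b = Level::Error { code: 2 };
    assert_eq!(discriminant(&a), discriminant(&b));

    // Different variants: discriminants differ.
    assert_ne!(discriminant(&Level::Info), discriminant(&a));
}
```

This is why the test works for `LoggingLevel` without requiring a `PartialEq` derive on the type.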
// === InitializedNotification ===
#[test]
fn initialized_notification_fields() {
let n = InitializedNotification::new();
assert_eq!(n.jsonrpc, "2.0");
assert_eq!(n.method, "notifications/initialized");
}
#[test]
fn initialized_notification_serializes() {
let n = InitializedNotification::default();
let json = serde_json::to_string(&n).unwrap();
assert!(json.contains("\"notifications/initialized\""));
}
// === Prompt types ===
#[test]
fn prompt_parse_with_arguments() {
let json = r#"{"name":"greet","description":"Greeting","arguments":[{"name":"lang","description":"Language","required":true}]}"#;
let prompt: Prompt = serde_json::from_str(json).unwrap();
assert_eq!(prompt.name, "greet");
assert_eq!(prompt.arguments.len(), 1);
assert!(prompt.arguments[0].required);
}
#[test]
fn prompt_message_parse() {
let json = r#"{"role":"user","content":{"type":"text","text":"hello"}}"#;
let msg: PromptMessage = serde_json::from_str(json).unwrap();
assert_eq!(msg.role, "user");
}
// === McpClientConfig ===
#[test]
fn mcp_client_config_roundtrip() {
let config = McpClientConfig {
server_url: "http://localhost:3000".to_string(),
server_info: McpServerInfo {
name: "test-server".to_string(),
version: "1.0.0".to_string(),
protocol_version: "2024-11-05".to_string(),
},
capabilities: McpCapabilities::default(),
};
let json = serde_json::to_string(&config).unwrap();
let parsed: McpClientConfig = serde_json::from_str(&json).unwrap();
assert_eq!(parsed.server_url, config.server_url);
assert_eq!(parsed.server_info.name, "test-server");
}

View File

@@ -616,7 +616,7 @@ struct GeminiResponseContent {
#[serde(default)]
parts: Vec<GeminiResponsePart>,
#[serde(default)]
#[allow(dead_code)]
#[allow(dead_code)] // @reserved: deserialized from Gemini API, not accessed in code
role: Option<String>,
}
@@ -643,7 +643,7 @@ struct GeminiUsageMetadata {
#[serde(default)]
candidates_token_count: Option<u32>,
#[serde(default)]
#[allow(dead_code)]
#[allow(dead_code)] // @reserved: deserialized from Gemini API, not accessed in code
total_token_count: Option<u32>,
}

View File

@@ -12,11 +12,12 @@
use std::sync::Arc;
use zclaw_growth::{
GrowthTracker, InjectionFormat, LlmDriverForExtraction,
MemoryExtractor, MemoryRetriever, PromptInjector, RetrievalResult,
VikingAdapter,
AggregatedPattern, CombinedExtraction, EvolutionConfig, EvolutionEngine,
ExperienceExtractor, ExperienceStore, GrowthTracker, InjectionFormat,
LlmDriverForExtraction, MemoryExtractor, MemoryRetriever, PromptInjector,
RetrievalResult, UserProfileUpdater, VikingAdapter,
};
use zclaw_memory::{ExtractedFactBatch, Fact, FactCategory};
use zclaw_memory::{ExtractedFactBatch, Fact, FactCategory, UserProfileStore};
use zclaw_types::{AgentId, Message, Result, SessionId};
/// Growth system integration for AgentLoop
@@ -32,6 +33,14 @@ pub struct GrowthIntegration {
injector: PromptInjector,
/// Growth tracker for tracking growth metrics
tracker: GrowthTracker,
/// Experience extractor for structured experience persistence
experience_extractor: ExperienceExtractor,
/// Profile updater for incremental user profile updates
profile_updater: UserProfileUpdater,
/// User profile store (optional, for profile updates)
profile_store: Option<Arc<UserProfileStore>>,
/// Evolution engine for L2 skill generation (optional)
evolution_engine: Option<EvolutionEngine>,
/// Configuration
config: GrowthConfigInner,
}
@@ -69,13 +78,19 @@ impl GrowthIntegration {
let retriever = MemoryRetriever::new(viking.clone());
let injector = PromptInjector::new();
let tracker = GrowthTracker::new(viking);
let tracker = GrowthTracker::new(viking.clone());
let evolution_engine = Some(EvolutionEngine::new(viking.clone()));
Self {
retriever,
extractor,
injector,
tracker,
experience_extractor: ExperienceExtractor::new()
.with_store(Arc::new(ExperienceStore::new(viking))),
profile_updater: UserProfileUpdater::new(),
profile_store: None,
evolution_engine,
config: GrowthConfigInner::default(),
}
}
@@ -102,11 +117,73 @@ impl GrowthIntegration {
self.config.enabled
}
/// Startup initialization: restore the evolution engine's trust records from persistent storage.
///
/// **Note**: FeedbackCollector already lazy-loads internally (it loads automatically on the
/// first save()), so this method is an optional optimization: preloading avoids latency on
/// the first feedback submission.
pub async fn initialize(&self) -> Result<()> {
if let Some(ref engine) = self.evolution_engine {
match engine.load_feedback().await {
Ok(count) => {
if count > 0 {
tracing::info!(
"[GrowthIntegration] Loaded {} trust records from storage",
count
);
}
}
Err(e) => {
tracing::warn!(
"[GrowthIntegration] Failed to load trust records: {}",
e
);
}
}
}
Ok(())
}
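The lazy-load behavior described in the note above (first access pays the load cost, later accesses reuse the cached value, and the load runs at most once even under concurrency) can be sketched with stdlib `std::sync::OnceLock`; the `TrustRecords` type here is illustrative, not the real FeedbackCollector:

```rust
use std::sync::OnceLock;

// Hypothetical cache illustrating the lazy-load-once pattern.
struct TrustRecords {
    records: OnceLock<Vec<u32>>,
}

impl TrustRecords {
    fn new() -> Self {
        Self { records: OnceLock::new() }
    }

    /// Loads on first access only; `get_or_init` guarantees the closure
    /// runs at most once even with concurrent callers.
    fn load(&self) -> &Vec<u32> {
        self.records.get_or_init(|| {
            // Stand-in for reading from persistent storage.
            vec![1, 2, 3]
        })
    }
}

fn main() {
    let store = TrustRecords::new();
    assert_eq!(store.load().len(), 3);
    // Second call returns the cached value without re-loading.
    assert_eq!(store.load().len(), 3);
}
```

Under this pattern an explicit `initialize()` is indeed optional: it only warms the cache earlier.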
/// Enable or disable auto extraction
pub fn set_auto_extract(&mut self, auto_extract: bool) {
self.config.auto_extract = auto_extract;
}
/// Set the user profile store for incremental profile updates
pub fn with_profile_store(mut self, store: Arc<UserProfileStore>) -> Self {
self.profile_store = Some(store);
self
}
/// Set the evolution engine configuration
pub fn with_evolution_config(mut self, config: EvolutionConfig) -> Self {
let engine = self.evolution_engine.take().unwrap_or_else(|| {
EvolutionEngine::new(Arc::new(VikingAdapter::in_memory()))
});
self.evolution_engine = Some(engine.with_config(config));
self
}
/// Enable or disable the evolution engine
pub fn set_evolution_enabled(&mut self, enabled: bool) {
if let Some(ref mut engine) = self.evolution_engine {
engine.set_enabled(enabled);
}
}
/// L2 check: are there evolvable patterns?
/// Called after extract_combined; returns the list of experience patterns that can be solidified into skills.
pub async fn check_evolution(
&self,
agent_id: &AgentId,
) -> Result<Vec<AggregatedPattern>> {
match &self.evolution_engine {
Some(engine) => engine.check_evolvable_patterns(&agent_id.to_string()).await,
None => Ok(Vec::new()),
}
}
/// Enhance system prompt with retrieved memories
///
/// This method:
@@ -213,8 +290,8 @@ impl GrowthIntegration {
Ok(count)
}
/// Combined extraction: single LLM call that produces both stored memories
/// and structured facts, avoiding double extraction overhead.
/// Combined extraction: single LLM call that produces stored memories,
/// structured experiences, and profile signals — all in one pass.
///
/// Returns `(memory_count, Option<ExtractedFactBatch>)` on success.
pub async fn extract_combined(
@@ -227,25 +304,28 @@ impl GrowthIntegration {
return Ok(None);
}
// Single LLM extraction call
let extracted = self
// Single LLM extraction: memories + experiences + profile_signals
let combined = self
.extractor
.extract(messages, session_id.clone())
.extract_combined(messages, session_id.clone())
.await
.unwrap_or_else(|e| {
tracing::warn!("[GrowthIntegration] Combined extraction failed: {}", e);
Vec::new()
CombinedExtraction::default()
});
if extracted.is_empty() {
if combined.memories.is_empty()
&& combined.experiences.is_empty()
&& !combined.profile_signals.has_any_signal()
{
return Ok(None);
}
let mem_count = extracted.len();
let mem_count = combined.memories.len();
// Store raw memories
self.extractor
.store_memories(&agent_id.to_string(), &extracted)
.store_memories(&agent_id.to_string(), &combined.memories)
.await?;
// Track learning event
@@ -253,8 +333,71 @@ impl GrowthIntegration {
.record_learning(agent_id, &session_id.to_string(), mem_count)
.await?;
// Convert same extracted memories to structured facts (no extra LLM call)
let facts: Vec<Fact> = extracted
// Persist structured experiences (L1 enhancement)
if let Ok(exp_count) = self
.experience_extractor
.persist_experiences(&agent_id.to_string(), &combined)
.await
{
if exp_count > 0 {
tracing::debug!(
"[GrowthIntegration] Persisted {} structured experiences",
exp_count
);
}
}
// Update user profile from extraction signals (L1 enhancement)
if let Some(profile_store) = &self.profile_store {
let updates = self.profile_updater.collect_updates(&combined);
let user_id = agent_id.to_string();
for update in updates {
let result = match update.kind {
zclaw_growth::ProfileUpdateKind::SetField => {
profile_store
.update_field(&user_id, &update.field, &update.value)
.await
}
zclaw_growth::ProfileUpdateKind::AppendArray => {
match update.field.as_str() {
"recent_topic" => {
profile_store
.add_recent_topic(&user_id, &update.value, 10)
.await
}
"pain_point" => {
profile_store
.add_pain_point(&user_id, &update.value, 10)
.await
}
"preferred_tool" => {
profile_store
.add_preferred_tool(&user_id, &update.value, 10)
.await
}
_ => {
tracing::warn!(
"[GrowthIntegration] Unknown array field: {}",
update.field
);
Ok(())
}
}
}
};
if let Err(e) = result {
tracing::warn!(
"[GrowthIntegration] Profile update failed for {}: {}",
update.field,
e
);
}
}
}
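The `add_recent_topic`/`add_pain_point`/`add_preferred_tool` calls above all pass a cap of 10, i.e. a bounded most-recent-first list. A stdlib-only sketch of that capped-append behavior (the de-duplication step is an assumption about the store, not confirmed by this diff):

```rust
use std::collections::VecDeque;

// Hypothetical capped list mirroring the `add_*(_, _, 10)`-style calls:
// newest entries sit at the front, anything past the cap is evicted.
fn push_capped(list: &mut VecDeque<String>, item: String, cap: usize) {
    // Assumed behavior: drop a duplicate first so a re-mentioned topic
    // moves back to the front instead of appearing twice.
    list.retain(|t| t != &item);
    list.push_front(item);
    list.truncate(cap);
}

fn main() {
    let mut topics = VecDeque::new();
    for t in ["rust", "serde", "tokio", "rust"] {
        push_capped(&mut topics, t.to_string(), 3);
    }
    // "rust" was re-mentioned, so it moved back to the front.
    let got: Vec<&str> = topics.iter().map(String::as_str).collect();
    assert_eq!(got, ["rust", "tokio", "serde"]);
}
```

Keeping the cap at the store level means callers never need to trim the profile arrays themselves.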
// Convert extracted memories to structured facts
let facts: Vec<Fact> = combined
.memories
.into_iter()
.map(|m| {
let category = match m.memory_type {

View File

@@ -1,7 +1,6 @@
//! Agent loop implementation
use std::sync::Arc;
use std::sync::Mutex;
use futures::StreamExt;
use tokio::sync::mpsc;
use zclaw_types::{AgentId, SessionId, Message, Result};
@@ -10,7 +9,6 @@ use crate::driver::{LlmDriver, CompletionRequest, ContentBlock};
use crate::stream::StreamChunk;
use crate::tool::{ToolRegistry, ToolContext, SkillExecutor};
use crate::tool::builtin::PathValidator;
use crate::loop_guard::{LoopGuard, LoopGuardResult};
use crate::growth::GrowthIntegration;
use crate::compaction::{self, CompactionConfig};
use crate::middleware::{self, MiddlewareChain};
@@ -23,7 +21,6 @@ pub struct AgentLoop {
driver: Arc<dyn LlmDriver>,
tools: ToolRegistry,
memory: Arc<MemoryStore>,
loop_guard: Mutex<LoopGuard>,
model: String,
system_prompt: Option<String>,
/// Custom agent personality for prompt assembly
@@ -38,10 +35,9 @@ pub struct AgentLoop {
compaction_threshold: usize,
/// Compaction behavior configuration
compaction_config: CompactionConfig,
/// Optional middleware chain — when `Some`, cross-cutting logic is
/// delegated to the chain instead of the inline code below.
/// When `None`, the legacy inline path is used (100% backward compatible).
middleware_chain: Option<MiddlewareChain>,
/// Middleware chain — cross-cutting concerns are delegated to the chain.
/// An empty chain (Default) is a no-op: all `run_*` methods return Continue/Allow.
middleware_chain: MiddlewareChain,
/// Chat mode: extended thinking enabled
thinking_enabled: bool,
/// Chat mode: reasoning effort level
@@ -62,7 +58,6 @@ impl AgentLoop {
driver,
tools,
memory,
loop_guard: Mutex::new(LoopGuard::default()),
model: String::new(), // Must be set via with_model()
system_prompt: None,
soul: None,
@@ -73,7 +68,7 @@ impl AgentLoop {
growth: None,
compaction_threshold: 0,
compaction_config: CompactionConfig::default(),
middleware_chain: None,
middleware_chain: MiddlewareChain::default(),
thinking_enabled: false,
reasoning_effort: None,
plan_mode: false,
@@ -167,11 +162,10 @@ impl AgentLoop {
self
}
/// Inject a middleware chain. When set, cross-cutting concerns (compaction,
/// loop guard, token calibration, etc.) are delegated to the chain instead
/// of the inline logic.
/// Inject a middleware chain. Cross-cutting concerns (compaction,
/// loop guard, token calibration, etc.) are delegated to the chain.
pub fn with_middleware_chain(mut self, chain: MiddlewareChain) -> Self {
self.middleware_chain = Some(chain);
self.middleware_chain = chain;
self
}
@@ -227,31 +221,7 @@ impl AgentLoop {
// Get all messages for context
let mut messages = self.memory.get_messages(&session_id).await?;
let use_middleware = self.middleware_chain.is_some();
// Apply compaction — skip inline path when middleware chain handles it
if !use_middleware && self.compaction_threshold > 0 {
let needs_async =
self.compaction_config.use_llm || self.compaction_config.memory_flush_enabled;
if needs_async {
let outcome = compaction::maybe_compact_with_config(
messages,
self.compaction_threshold,
&self.compaction_config,
&self.agent_id,
&session_id,
Some(&self.driver),
self.growth.as_ref(),
)
.await;
messages = outcome.messages;
} else {
messages = compaction::maybe_compact(messages, self.compaction_threshold);
}
}
// Enhance system prompt — skip when middleware chain handles it
let mut enhanced_prompt = if use_middleware {
// Enhance system prompt via PromptBuilder (middleware may further modify)
let prompt_ctx = PromptContext {
base_prompt: self.system_prompt.clone(),
soul: self.soul.clone(),
@@ -260,16 +230,10 @@ impl AgentLoop {
tool_definitions: self.tools.definitions(),
agent_name: None,
};
PromptBuilder::new().build(&prompt_ctx)
} else if let Some(ref growth) = self.growth {
let base = self.system_prompt.as_deref().unwrap_or("");
growth.enhance_prompt(&self.agent_id, base, &input).await?
} else {
self.system_prompt.clone().unwrap_or_default()
};
let mut enhanced_prompt = PromptBuilder::new().build(&prompt_ctx);
// Run middleware before_completion hooks (compaction, memory inject, etc.)
if let Some(ref chain) = self.middleware_chain {
{
let mut mw_ctx = middleware::MiddlewareContext {
agent_id: self.agent_id.clone(),
session_id: session_id.clone(),
@@ -280,7 +244,7 @@ impl AgentLoop {
input_tokens: 0,
output_tokens: 0,
};
match chain.run_before_completion(&mut mw_ctx).await? {
match self.middleware_chain.run_before_completion(&mut mw_ctx).await? {
middleware::MiddlewareDecision::Continue => {
messages = mw_ctx.messages;
enhanced_prompt = mw_ctx.system_prompt;
@@ -400,7 +364,6 @@ impl AgentLoop {
// Create tool context and execute all tools
let tool_context = self.create_tool_context(session_id.clone());
let mut circuit_breaker_triggered = false;
let mut abort_result: Option<AgentLoopResult> = None;
let mut clarification_result: Option<AgentLoopResult> = None;
for (id, name, input) in tool_calls {
@@ -408,8 +371,8 @@ impl AgentLoop {
if abort_result.is_some() {
break;
}
// Check tool call safety — via middleware chain or inline loop guard
if let Some(ref chain) = self.middleware_chain {
// Check tool call safety — via middleware chain
{
let mw_ctx_ref = middleware::MiddlewareContext {
agent_id: self.agent_id.clone(),
session_id: session_id.clone(),
@@ -420,7 +383,7 @@ impl AgentLoop {
input_tokens: total_input_tokens,
output_tokens: total_output_tokens,
};
match chain.run_before_tool_call(&mw_ctx_ref, &name, &input).await? {
match self.middleware_chain.run_before_tool_call(&mw_ctx_ref, &name, &input).await? {
middleware::ToolCallDecision::Allow => {}
middleware::ToolCallDecision::Block(msg) => {
tracing::warn!("[AgentLoop] Tool '{}' blocked by middleware: {}", name, msg);
@@ -456,26 +419,6 @@ impl AgentLoop {
});
}
}
} else {
// Legacy inline path
let guard_result = self.loop_guard.lock().unwrap_or_else(|e| e.into_inner()).check(&name, &input);
match guard_result {
LoopGuardResult::CircuitBreaker => {
tracing::warn!("[AgentLoop] Circuit breaker triggered by tool '{}'", name);
circuit_breaker_triggered = true;
break;
}
LoopGuardResult::Blocked => {
tracing::warn!("[AgentLoop] Tool '{}' blocked by loop guard", name);
let error_output = serde_json::json!({ "error": "工具调用被循环防护拦截" });
messages.push(Message::tool_result(id, zclaw_types::ToolId::new(&name), error_output, true));
continue;
}
LoopGuardResult::Warn => {
tracing::warn!("[AgentLoop] Tool '{}' triggered loop guard warning", name);
}
LoopGuardResult::Allowed => {}
}
}
let tool_result = match tokio::time::timeout(
@@ -537,21 +480,10 @@ impl AgentLoop {
break result;
}
// If circuit breaker was triggered, terminate immediately
if circuit_breaker_triggered {
let msg = "检测到工具调用循环,已自动终止";
self.memory.append_message(&session_id, &Message::assistant(msg)).await?;
break AgentLoopResult {
response: msg.to_string(),
input_tokens: total_input_tokens,
output_tokens: total_output_tokens,
iterations,
};
}
};
// Post-completion processing — middleware chain or inline growth
if let Some(ref chain) = self.middleware_chain {
// Post-completion processing — middleware chain
{
let mw_ctx = middleware::MiddlewareContext {
agent_id: self.agent_id.clone(),
session_id: session_id.clone(),
@@ -562,16 +494,9 @@ impl AgentLoop {
input_tokens: total_input_tokens,
output_tokens: total_output_tokens,
};
if let Err(e) = chain.run_after_completion(&mw_ctx).await {
if let Err(e) = self.middleware_chain.run_after_completion(&mw_ctx).await {
tracing::warn!("[AgentLoop] Middleware after_completion failed: {}", e);
}
} else if let Some(ref growth) = self.growth {
// Legacy inline path
if let Ok(all_messages) = self.memory.get_messages(&session_id).await {
if let Err(e) = growth.process_conversation(&self.agent_id, &all_messages, session_id.clone()).await {
tracing::warn!("[AgentLoop] Growth processing failed: {}", e);
}
}
}
Ok(result)
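The inline loop-guard path removed in the hunks above (Allowed/Warn/Blocked/CircuitBreaker on repeated identical tool calls) now lives behind the middleware chain. A stdlib-only sketch of the core idea; names and thresholds are illustrative, not the real `LoopGuard`:

```rust
use std::collections::HashMap;

// Illustrative guard decision, loosely mirroring LoopGuardResult.
enum GuardResult {
    Allowed,
    Warn,
    CircuitBreaker,
}

struct LoopGuard {
    counts: HashMap<String, u32>,
    warn_at: u32,  // assumed threshold
    break_at: u32, // assumed threshold
}

impl LoopGuard {
    fn new() -> Self {
        Self { counts: HashMap::new(), warn_at: 3, break_at: 5 }
    }

    /// Count identical (tool, input) pairs and escalate past thresholds.
    fn check(&mut self, tool: &str, input: &str) -> GuardResult {
        let key = format!("{tool}:{input}");
        let n = self.counts.entry(key).or_insert(0);
        *n += 1;
        match *n {
            n if n >= self.break_at => GuardResult::CircuitBreaker,
            n if n >= self.warn_at => GuardResult::Warn,
            _ => GuardResult::Allowed,
        }
    }
}

fn main() {
    let mut guard = LoopGuard::new();
    for i in 1..=5u32 {
        match guard.check("search", "{\"q\":\"x\"}") {
            GuardResult::Allowed => assert!(i < 3),
            GuardResult::Warn => assert!((3..5).contains(&i)),
            GuardResult::CircuitBreaker => assert_eq!(i, 5),
        }
    }
}
```

Moving this behind a middleware hook lets the same checks apply on both the blocking and streaming execution paths without duplicated inline code.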
@@ -593,31 +518,7 @@ impl AgentLoop {
// Get all messages for context
let mut messages = self.memory.get_messages(&session_id).await?;
let use_middleware = self.middleware_chain.is_some();
// Apply compaction — skip inline path when middleware chain handles it
if !use_middleware && self.compaction_threshold > 0 {
let needs_async =
self.compaction_config.use_llm || self.compaction_config.memory_flush_enabled;
if needs_async {
let outcome = compaction::maybe_compact_with_config(
messages,
self.compaction_threshold,
&self.compaction_config,
&self.agent_id,
&session_id,
Some(&self.driver),
self.growth.as_ref(),
)
.await;
messages = outcome.messages;
} else {
messages = compaction::maybe_compact(messages, self.compaction_threshold);
}
}
// Enhance system prompt — skip when middleware chain handles it
let mut enhanced_prompt = if use_middleware {
// Enhance system prompt via PromptBuilder (middleware may further modify)
let prompt_ctx = PromptContext {
base_prompt: self.system_prompt.clone(),
soul: self.soul.clone(),
@@ -626,16 +527,10 @@ impl AgentLoop {
tool_definitions: self.tools.definitions(),
agent_name: None,
};
PromptBuilder::new().build(&prompt_ctx)
} else if let Some(ref growth) = self.growth {
let base = self.system_prompt.as_deref().unwrap_or("");
growth.enhance_prompt(&self.agent_id, base, &input).await?
} else {
self.system_prompt.clone().unwrap_or_default()
};
let mut enhanced_prompt = PromptBuilder::new().build(&prompt_ctx);
// Run middleware before_completion hooks (compaction, memory inject, etc.)
if let Some(ref chain) = self.middleware_chain {
{
let mut mw_ctx = middleware::MiddlewareContext {
agent_id: self.agent_id.clone(),
session_id: session_id.clone(),
@@ -646,18 +541,20 @@ impl AgentLoop {
input_tokens: 0,
output_tokens: 0,
};
match chain.run_before_completion(&mut mw_ctx).await? {
match self.middleware_chain.run_before_completion(&mut mw_ctx).await? {
middleware::MiddlewareDecision::Continue => {
messages = mw_ctx.messages;
enhanced_prompt = mw_ctx.system_prompt;
}
middleware::MiddlewareDecision::Stop(reason) => {
let _ = tx.send(LoopEvent::Complete(AgentLoopResult {
if let Err(e) = tx.send(LoopEvent::Complete(AgentLoopResult {
response: reason,
input_tokens: 0,
output_tokens: 0,
iterations: 1,
})).await;
})).await {
tracing::warn!("[AgentLoop] Failed to send Complete event: {}", e);
}
return Ok(rx);
}
}
@@ -668,7 +565,6 @@ impl AgentLoop {
let memory = self.memory.clone();
let driver = self.driver.clone();
let tools = self.tools.clone();
let loop_guard_clone = self.loop_guard.lock().unwrap_or_else(|e| e.into_inner()).clone();
let middleware_chain = self.middleware_chain.clone();
let skill_executor = self.skill_executor.clone();
let path_validator = self.path_validator.clone();
@@ -682,7 +578,6 @@ impl AgentLoop {
tokio::spawn(async move {
let mut messages = messages;
let loop_guard_clone = Mutex::new(loop_guard_clone);
let max_iterations = 10;
let mut iteration = 0;
let mut total_input_tokens = 0u32;
@@ -691,15 +586,19 @@ impl AgentLoop {
'outer: loop {
iteration += 1;
if iteration > max_iterations {
let _ = tx.send(LoopEvent::Error("达到最大迭代次数".to_string())).await;
if let Err(e) = tx.send(LoopEvent::Error("达到最大迭代次数".to_string())).await {
tracing::warn!("[AgentLoop] Failed to send Error event: {}", e);
}
break;
}
// Notify iteration start
let _ = tx.send(LoopEvent::IterationStart {
if let Err(e) = tx.send(LoopEvent::IterationStart {
iteration,
max_iterations,
}).await;
}).await {
tracing::warn!("[AgentLoop] Failed to send IterationStart event: {}", e);
}
// Build completion request
let request = CompletionRequest {
@@ -742,13 +641,17 @@ impl AgentLoop {
text_delta_count += 1;
tracing::debug!("[AgentLoop] TextDelta #{}: {} chars", text_delta_count, delta.len());
iteration_text.push_str(delta);
let _ = tx.send(LoopEvent::Delta(delta.clone())).await;
if let Err(e) = tx.send(LoopEvent::Delta(delta.clone())).await {
tracing::warn!("[AgentLoop] Failed to send Delta event: {}", e);
}
}
StreamChunk::ThinkingDelta { delta } => {
thinking_delta_count += 1;
tracing::debug!("[AgentLoop] ThinkingDelta #{}: {} chars", thinking_delta_count, delta.len());
reasoning_text.push_str(delta);
let _ = tx.send(LoopEvent::ThinkingDelta(delta.clone())).await;
if let Err(e) = tx.send(LoopEvent::ThinkingDelta(delta.clone())).await {
tracing::warn!("[AgentLoop] Failed to send ThinkingDelta event: {}", e);
}
}
StreamChunk::ToolUseStart { id, name } => {
tracing::debug!("[AgentLoop] ToolUseStart: id={}, name={}", id, name);
@@ -770,7 +673,9 @@ impl AgentLoop {
// Update with final parsed input and emit ToolStart event
if let Some(tool) = pending_tool_calls.iter_mut().find(|(tid, _, _)| tid == id) {
tool.2 = input.clone();
let _ = tx.send(LoopEvent::ToolStart { name: tool.1.clone(), input: input.clone() }).await;
if let Err(e) = tx.send(LoopEvent::ToolStart { name: tool.1.clone(), input: input.clone() }).await {
tracing::warn!("[AgentLoop] Failed to send ToolStart event: {}", e);
}
}
}
StreamChunk::Complete { input_tokens: it, output_tokens: ot, .. } => {
@@ -787,20 +692,26 @@ impl AgentLoop {
}
StreamChunk::Error { message } => {
tracing::error!("[AgentLoop] Stream error: {}", message);
let _ = tx.send(LoopEvent::Error(message.clone())).await;
if let Err(e) = tx.send(LoopEvent::Error(message.clone())).await {
tracing::warn!("[AgentLoop] Failed to send Error event: {}", e);
}
stream_errored = true;
}
}
}
Ok(Some(Err(e))) => {
tracing::error!("[AgentLoop] Chunk error: {}", e);
let _ = tx.send(LoopEvent::Error(format!("LLM 响应错误: {}", e.to_string()))).await;
if let Err(send_err) = tx.send(LoopEvent::Error(format!("LLM 响应错误: {}", e))).await {
tracing::warn!("[AgentLoop] Failed to send Error event: {}", send_err);
}
stream_errored = true;
}
Ok(None) => break, // Stream ended normally
Err(_) => {
tracing::error!("[AgentLoop] Stream chunk timeout ({}s)", chunk_timeout.as_secs());
let _ = tx.send(LoopEvent::Error("LLM 响应超时,请重试".to_string())).await;
if let Err(e) = tx.send(LoopEvent::Error("LLM 响应超时,请重试".to_string())).await {
tracing::warn!("[AgentLoop] Failed to send Error event: {}", e);
}
stream_errored = true;
}
}
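The hunks above replace `let _ = tx.send(...)` with logged failures. The reason this matters: once the receiver side of a channel is gone, every send fails, and silently discarding the error loses the only signal that the consumer went away. A sketch with stdlib `std::sync::mpsc` (tokio's async `mpsc` behaves analogously for a dropped receiver):

```rust
use std::sync::mpsc;

fn main() {
    let (tx, rx) = mpsc::channel::<String>();

    // Receiver alive: send succeeds and the value arrives.
    assert!(tx.send("delta".to_string()).is_ok());
    assert_eq!(rx.recv().unwrap(), "delta");

    // Receiver dropped: send fails, and SendError hands the payload back.
    drop(rx);
    let err = tx.send("lost".to_string()).unwrap_err();
    // This is the spot where the patch emits a tracing::warn! instead of
    // swallowing the error with `let _ = ...`.
    assert_eq!(err.0, "lost");
}
```

Logging here is a diagnostic improvement only; the loop still proceeds, since a disconnected consumer is not itself a fatal condition for the agent.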
@@ -820,7 +731,9 @@ impl AgentLoop {
if iteration_text.is_empty() && !reasoning_text.is_empty() {
tracing::info!("[AgentLoop] Model generated {} chars of reasoning but no text — using reasoning as response",
reasoning_text.len());
let _ = tx.send(LoopEvent::Delta(reasoning_text.clone())).await;
if let Err(e) = tx.send(LoopEvent::Delta(reasoning_text.clone())).await {
tracing::warn!("[AgentLoop] Failed to send Delta event: {}", e);
}
iteration_text = reasoning_text.clone();
} else if iteration_text.is_empty() {
tracing::warn!("[AgentLoop] No text content after {} chunks (thinking_delta={})",
@@ -838,15 +751,17 @@ impl AgentLoop {
tracing::warn!("[AgentLoop] Failed to save final assistant message: {}", e);
}
let _ = tx.send(LoopEvent::Complete(AgentLoopResult {
if let Err(e) = tx.send(LoopEvent::Complete(AgentLoopResult {
response: iteration_text.clone(),
input_tokens: total_input_tokens,
output_tokens: total_output_tokens,
iterations: iteration,
})).await;
})).await {
tracing::warn!("[AgentLoop] Failed to send Complete event: {}", e);
}
// Post-completion: middleware after_completion (memory extraction, etc.)
if let Some(ref chain) = middleware_chain {
{
let mw_ctx = middleware::MiddlewareContext {
agent_id: agent_id.clone(),
session_id: session_id_clone.clone(),
@@ -857,7 +772,7 @@ impl AgentLoop {
input_tokens: total_input_tokens,
output_tokens: total_output_tokens,
};
if let Err(e) = chain.run_after_completion(&mw_ctx).await {
if let Err(e) = middleware_chain.run_after_completion(&mw_ctx).await {
tracing::warn!("[AgentLoop] Streaming middleware after_completion failed: {}", e);
}
}
@@ -889,8 +804,8 @@ impl AgentLoop {
for (id, name, input) in pending_tool_calls {
tracing::debug!("[AgentLoop] Executing tool: name={}, input={:?}", name, input);
// Check tool call safety — via middleware chain or inline loop guard
if let Some(ref chain) = middleware_chain {
// Check tool call safety — via middleware chain
{
let mw_ctx = middleware::MiddlewareContext {
agent_id: agent_id.clone(),
session_id: session_id_clone.clone(),
@@ -901,18 +816,22 @@ impl AgentLoop {
input_tokens: total_input_tokens,
output_tokens: total_output_tokens,
};
match chain.run_before_tool_call(&mw_ctx, &name, &input).await {
match middleware_chain.run_before_tool_call(&mw_ctx, &name, &input).await {
Ok(middleware::ToolCallDecision::Allow) => {}
Ok(middleware::ToolCallDecision::Block(msg)) => {
tracing::warn!("[AgentLoop] Tool '{}' blocked by middleware: {}", name, msg);
let error_output = serde_json::json!({ "error": msg });
let _ = tx.send(LoopEvent::ToolEnd { name: name.clone(), output: error_output.clone() }).await;
if let Err(e) = tx.send(LoopEvent::ToolEnd { name: name.clone(), output: error_output.clone() }).await {
tracing::warn!("[AgentLoop] Failed to send ToolEnd event: {}", e);
}
messages.push(Message::tool_result(id, zclaw_types::ToolId::new(&name), error_output, true));
continue;
}
Ok(middleware::ToolCallDecision::AbortLoop(reason)) => {
tracing::warn!("[AgentLoop] Loop aborted by middleware: {}", reason);
let _ = tx.send(LoopEvent::Error(reason)).await;
if let Err(e) = tx.send(LoopEvent::Error(reason)).await {
tracing::warn!("[AgentLoop] Failed to send Error event: {}", e);
}
break 'outer;
}
Ok(middleware::ToolCallDecision::ReplaceInput(new_input)) => {
@@ -936,18 +855,24 @@ impl AgentLoop {
let (result, is_error) = if let Some(tool) = tools.get(&name) {
match tool.execute(new_input, &tool_context).await {
Ok(output) => {
let _ = tx.send(LoopEvent::ToolEnd { name: name.clone(), output: output.clone() }).await;
if let Err(e) = tx.send(LoopEvent::ToolEnd { name: name.clone(), output: output.clone() }).await {
tracing::warn!("[AgentLoop] Failed to send ToolEnd event: {}", e);
}
(output, false)
}
Err(e) => {
let error_output = serde_json::json!({ "error": e.to_string() });
let _ = tx.send(LoopEvent::ToolEnd { name: name.clone(), output: error_output.clone() }).await;
if let Err(e) = tx.send(LoopEvent::ToolEnd { name: name.clone(), output: error_output.clone() }).await {
tracing::warn!("[AgentLoop] Failed to send ToolEnd event: {}", e);
}
(error_output, true)
}
}
} else {
let error_output = serde_json::json!({ "error": format!("Unknown tool: {}", name) });
let _ = tx.send(LoopEvent::ToolEnd { name: name.clone(), output: error_output.clone() }).await;
if let Err(e) = tx.send(LoopEvent::ToolEnd { name: name.clone(), output: error_output.clone() }).await {
tracing::warn!("[AgentLoop] Failed to send ToolEnd event: {}", e);
}
(error_output, true)
};
messages.push(Message::tool_result(id, zclaw_types::ToolId::new(&name), result, is_error));
@@ -956,31 +881,13 @@ impl AgentLoop {
Err(e) => {
tracing::error!("[AgentLoop] Middleware error for tool '{}': {}", name, e);
let error_output = serde_json::json!({ "error": e.to_string() });
let _ = tx.send(LoopEvent::ToolEnd { name: name.clone(), output: error_output.clone() }).await;
if let Err(e) = tx.send(LoopEvent::ToolEnd { name: name.clone(), output: error_output.clone() }).await {
tracing::warn!("[AgentLoop] Failed to send ToolEnd event: {}", e);
}
messages.push(Message::tool_result(id, zclaw_types::ToolId::new(&name), error_output, true));
continue;
}
}
} else {
// Legacy inline loop guard path
let guard_result = loop_guard_clone.lock().unwrap_or_else(|e| e.into_inner()).check(&name, &input);
match guard_result {
LoopGuardResult::CircuitBreaker => {
let _ = tx.send(LoopEvent::Error("检测到工具调用循环,已自动终止".to_string())).await;
break 'outer;
}
LoopGuardResult::Blocked => {
tracing::warn!("[AgentLoop] Tool '{}' blocked by loop guard", name);
let error_output = serde_json::json!({ "error": "工具调用被循环防护拦截" });
let _ = tx.send(LoopEvent::ToolEnd { name: name.clone(), output: error_output.clone() }).await;
messages.push(Message::tool_result(id, zclaw_types::ToolId::new(&name), error_output, true));
continue;
}
LoopGuardResult::Warn => {
tracing::warn!("[AgentLoop] Tool '{}' triggered loop guard warning", name);
}
LoopGuardResult::Allowed => {}
}
}
// Use pre-resolved path_validator (already has default fallback from create_tool_context logic)
let pv = path_validator.clone().unwrap_or_else(|| {
@@ -1005,20 +912,26 @@ impl AgentLoop {
match tool.execute(input.clone(), &tool_context).await {
Ok(output) => {
tracing::debug!("[AgentLoop] Tool '{}' executed successfully: {:?}", name, output);
let _ = tx.send(LoopEvent::ToolEnd { name: name.clone(), output: output.clone() }).await;
if let Err(e) = tx.send(LoopEvent::ToolEnd { name: name.clone(), output: output.clone() }).await {
tracing::warn!("[AgentLoop] Failed to send ToolEnd event: {}", e);
}
(output, false)
}
Err(e) => {
tracing::error!("[AgentLoop] Tool '{}' execution failed: {}", name, e);
let error_output = serde_json::json!({ "error": e.to_string() });
let _ = tx.send(LoopEvent::ToolEnd { name: name.clone(), output: error_output.clone() }).await;
if let Err(e) = tx.send(LoopEvent::ToolEnd { name: name.clone(), output: error_output.clone() }).await {
tracing::warn!("[AgentLoop] Failed to send ToolEnd event: {}", e);
}
(error_output, true)
}
}
} else {
tracing::error!("[AgentLoop] Tool '{}' not found in registry", name);
let error_output = serde_json::json!({ "error": format!("Unknown tool: {}", name) });
let _ = tx.send(LoopEvent::ToolEnd { name: name.clone(), output: error_output.clone() }).await;
if let Err(e) = tx.send(LoopEvent::ToolEnd { name: name.clone(), output: error_output.clone() }).await {
tracing::warn!("[AgentLoop] Failed to send ToolEnd event: {}", e);
}
(error_output, true)
};
@@ -1038,13 +951,17 @@ impl AgentLoop {
is_error,
));
// Send the question as final delta so the user sees it
let _ = tx.send(LoopEvent::Delta(question.clone())).await;
let _ = tx.send(LoopEvent::Complete(AgentLoopResult {
if let Err(e) = tx.send(LoopEvent::Delta(question.clone())).await {
tracing::warn!("[AgentLoop] Failed to send Delta event: {}", e);
}
if let Err(e) = tx.send(LoopEvent::Complete(AgentLoopResult {
response: question.clone(),
input_tokens: total_input_tokens,
output_tokens: total_output_tokens,
iterations: iteration,
})).await;
})).await {
tracing::warn!("[AgentLoop] Failed to send Complete event: {}", e);
}
if let Err(e) = memory.append_message(&session_id_clone, &Message::assistant(&question)).await {
tracing::warn!("[AgentLoop] Failed to save clarification message: {}", e);
}

View File

@@ -279,3 +279,4 @@ pub mod token_calibration;
pub mod tool_error;
pub mod tool_output_guard;
pub mod trajectory_recorder;
pub mod evolution;

View File

@@ -20,19 +20,19 @@ use super::{AgentMiddleware, MiddlewareContext, MiddlewareDecision};
// ---------------------------------------------------------------------------
static RE_COMPANY: LazyLock<Regex> = LazyLock::new(|| {
Regex::new(r"[^\s]{1,20}(?:公司|厂|集团|工作室|商行|有限|股份)").unwrap()
Regex::new(r"[^\s]{1,20}(?:公司|厂|集团|工作室|商行|有限|股份)").expect("static regex is valid")
});
static RE_MONEY: LazyLock<Regex> = LazyLock::new(|| {
Regex::new(r"[¥¥$]\s*[\d,.]+[万亿]?元?|[\d,.]+[万亿]元").unwrap()
Regex::new(r"[¥¥$]\s*[\d,.]+[万亿]?元?|[\d,.]+[万亿]元").expect("static regex is valid")
});
static RE_PHONE: LazyLock<Regex> = LazyLock::new(|| {
Regex::new(r"1[3-9]\d-?\d{4}-?\d{4}").unwrap()
Regex::new(r"1[3-9]\d-?\d{4}-?\d{4}").expect("static regex is valid")
});
static RE_EMAIL: LazyLock<Regex> = LazyLock::new(|| {
Regex::new(r"[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}").unwrap()
Regex::new(r"[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}").expect("static regex is valid")
});
static RE_ID_CARD: LazyLock<Regex> = LazyLock::new(|| {
Regex::new(r"\b\d{17}[\dXx]\b").unwrap()
Regex::new(r"\b\d{17}[\dXx]\b").expect("static regex is valid")
});
// ---------------------------------------------------------------------------
@@ -130,7 +130,7 @@ impl DataMasker {
fn recover_read<T>(lock: &RwLock<T>) -> std::sync::LockResult<std::sync::RwLockReadGuard<'_, T>> {
match lock.read() {
Ok(guard) => Ok(guard),
Err(e) => {
Err(_e) => {
tracing::warn!("[DataMasker] RwLock poisoned during read, recovering");
// Poison error still gives us access to the inner guard
lock.read()
@@ -141,7 +141,7 @@ impl DataMasker {
fn recover_write<T>(lock: &RwLock<T>) -> std::sync::LockResult<std::sync::RwLockWriteGuard<'_, T>> {
match lock.write() {
Ok(guard) => Ok(guard),
Err(e) => {
Err(_e) => {
tracing::warn!("[DataMasker] RwLock poisoned during write, recovering");
lock.write()
}
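`recover_read`/`recover_write` above log lock poisoning and retry. The underlying mechanism is that a poisoned `RwLock` still hands back its guard through `PoisonError::into_inner`, so the data stays reachable after a panicking writer. A std-only sketch of that recovery (names here are illustrative, not from the codebase):

```rust
use std::sync::{Arc, RwLock};
use std::thread;

fn main() {
    let lock = Arc::new(RwLock::new(5_i32));

    // Poison the lock: a thread panics while holding the write guard.
    let l2 = Arc::clone(&lock);
    let _ = thread::spawn(move || {
        let _guard = l2.write().unwrap();
        panic!("poison");
    })
    .join();

    // Every later access now reports poisoning...
    assert!(lock.read().is_err());

    // ...but the PoisonError still carries the guard, so we can recover.
    let value = *lock.read().unwrap_or_else(|e| e.into_inner());
    assert_eq!(value, 5);
}
```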

View File

@@ -0,0 +1,165 @@
//! Evolution engine middleware
//! Detects and surfaces "skill evolution confirmation" prompts in butler conversations
//! Priority 78 (runs before ButlerRouter@80)
use async_trait::async_trait;
use std::sync::Arc;
use tokio::sync::RwLock;
use crate::middleware::{
AgentMiddleware, MiddlewareContext, MiddlewareDecision,
};
use zclaw_types::Result;
/// A pending evolution event awaiting user confirmation
#[derive(Debug, Clone)]
pub struct PendingEvolution {
pub pattern_name: String,
pub trigger_suggestion: String,
pub description: String,
}
/// Evolution engine middleware.
/// Checks for pending evolution events and injects a confirmation prompt into the system prompt.
pub struct EvolutionMiddleware {
pending: Arc<RwLock<Vec<PendingEvolution>>>,
}
impl EvolutionMiddleware {
pub fn new() -> Self {
Self {
pending: Arc::new(RwLock::new(Vec::new())),
}
}
/// Queue a pending evolution event for confirmation
pub async fn add_pending(&self, evolution: PendingEvolution) {
self.pending.write().await.push(evolution);
}
/// Take and clear all pending events
pub async fn drain_pending(&self) -> Vec<PendingEvolution> {
let mut pending = self.pending.write().await;
std::mem::take(&mut *pending)
}
/// Number of events currently awaiting confirmation
pub async fn pending_count(&self) -> usize {
self.pending.read().await.len()
}
}
impl Default for EvolutionMiddleware {
fn default() -> Self {
Self::new()
}
}
#[async_trait]
impl AgentMiddleware for EvolutionMiddleware {
fn name(&self) -> &str {
"evolution"
}
fn priority(&self) -> i32 {
78 // before ButlerRouter (80)
}
async fn before_completion(
&self,
ctx: &mut MiddlewareContext,
) -> Result<MiddlewareDecision> {
// Fast emptiness check under a read lock, so idle conversations never take the write lock
if self.pending.read().await.is_empty() {
return Ok(MiddlewareDecision::Continue);
}
// Remove only the first event; later ones stay queued for the next injection
let to_inject = {
let mut pending = self.pending.write().await;
if pending.is_empty() {
return Ok(MiddlewareDecision::Continue);
}
pending.remove(0)
};
let injection = format!(
"\n\n<evolution-suggestion>\n\
我注意到你经常做「{pattern}」相关的事情。\n\
我可以帮你整理成一个技能,以后直接说「{trigger}」就能用了。\n\
技能描述:{desc}\n\
如果你同意,请回复 '确认保存技能'。如果你想调整,可以告诉我怎么改。\n\
</evolution-suggestion>",
pattern = to_inject.pattern_name,
trigger = to_inject.trigger_suggestion,
desc = to_inject.description,
);
ctx.system_prompt.push_str(&injection);
tracing::info!(
"[EvolutionMiddleware] Injected evolution suggestion for: {}",
to_inject.pattern_name
);
Ok(MiddlewareDecision::Continue)
}
}
#[cfg(test)]
mod tests {
use super::*;
#[tokio::test]
async fn test_no_pending_continues() {
let mw = EvolutionMiddleware::new();
assert_eq!(mw.pending_count().await, 0);
}
#[tokio::test]
async fn test_add_and_drain() {
let mw = EvolutionMiddleware::new();
mw.add_pending(PendingEvolution {
pattern_name: "报表生成".to_string(),
trigger_suggestion: "生成报表".to_string(),
description: "自动生成每日报表".to_string(),
})
.await;
assert_eq!(mw.pending_count().await, 1);
let drained = mw.drain_pending().await;
assert_eq!(drained.len(), 1);
assert_eq!(drained[0].pattern_name, "报表生成");
assert_eq!(mw.pending_count().await, 0);
}
#[tokio::test]
async fn test_name_and_priority() {
let mw = EvolutionMiddleware::new();
assert_eq!(mw.name(), "evolution");
assert_eq!(mw.priority(), 78);
}
#[tokio::test]
async fn test_only_first_event_injected() {
let mw = EvolutionMiddleware::new();
mw.add_pending(PendingEvolution {
pattern_name: "事件A".to_string(),
trigger_suggestion: "触发A".to_string(),
description: "描述A".to_string(),
})
.await;
mw.add_pending(PendingEvolution {
pattern_name: "事件B".to_string(),
trigger_suggestion: "触发B".to_string(),
description: "描述B".to_string(),
})
.await;
// Simulate an injection: read-lock emptiness check, then write-lock removal of the first event
let first = {
let mut pending = mw.pending.write().await;
pending.remove(0)
};
assert_eq!(first.pattern_name, "事件A");
assert_eq!(mw.pending_count().await, 1); // "事件B" is still queued
}
}
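The middleware above keeps pending suggestions in an `Arc<RwLock<Vec<_>>>` and injects only the first one per conversation turn. A std-only sketch of that queue discipline (assumption: a synchronous `Mutex<Vec<String>>` replaces the tokio `RwLock<Vec<PendingEvolution>>`; names are illustrative):

```rust
use std::sync::Mutex;

struct PendingQueue {
    pending: Mutex<Vec<String>>,
}

impl PendingQueue {
    fn new() -> Self {
        Self { pending: Mutex::new(Vec::new()) }
    }

    fn add(&self, item: String) {
        self.pending.lock().unwrap().push(item);
    }

    /// Pop only the first item, mirroring before_completion: the rest
    /// stay queued for the next conversation turn.
    fn take_first(&self) -> Option<String> {
        let mut p = self.pending.lock().unwrap();
        if p.is_empty() { None } else { Some(p.remove(0)) }
    }

    fn len(&self) -> usize {
        self.pending.lock().unwrap().len()
    }
}

fn main() {
    let q = PendingQueue::new();
    q.add("事件A".to_string());
    q.add("事件B".to_string());
    assert_eq!(q.take_first().as_deref(), Some("事件A")); // first suggestion injected
    assert_eq!(q.len(), 1); // the second stays for the next turn
}
```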

View File

@@ -11,14 +11,17 @@ use async_trait::async_trait;
use zclaw_types::Result;
use crate::growth::GrowthIntegration;
use crate::middleware::{AgentMiddleware, MiddlewareContext, MiddlewareDecision};
use crate::middleware::evolution::EvolutionMiddleware;
/// Middleware that handles memory retrieval (pre-completion) and extraction (post-completion).
///
/// Wraps `GrowthIntegration` and delegates:
/// - `before_completion` → `enhance_prompt()` for memory injection
/// - `after_completion` → `process_conversation()` for memory extraction
/// - `after_completion` → `extract_combined()` for memory extraction + evolution check
pub struct MemoryMiddleware {
growth: GrowthIntegration,
/// Shared EvolutionMiddleware for pushing evolution suggestions
evolution_mw: Option<std::sync::Arc<EvolutionMiddleware>>,
/// Minimum seconds between extractions for the same agent (debounce).
debounce_secs: u64,
/// Timestamp of last extraction per agent (for debouncing).
@@ -29,11 +32,18 @@ impl MemoryMiddleware {
pub fn new(growth: GrowthIntegration) -> Self {
Self {
growth,
evolution_mw: None,
debounce_secs: 30,
last_extraction: std::sync::Mutex::new(std::collections::HashMap::new()),
}
}
/// Attach a shared EvolutionMiddleware for pushing evolution suggestions.
pub fn with_evolution(mut self, mw: std::sync::Arc<EvolutionMiddleware>) -> Self {
self.evolution_mw = Some(mw);
self
}
/// Set the debounce interval in seconds.
pub fn with_debounce_secs(mut self, secs: u64) -> Self {
self.debounce_secs = secs;
@@ -52,6 +62,49 @@ impl MemoryMiddleware {
map.insert(agent_id.to_string(), now);
true
}
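The debounce above rate-limits extraction per agent via a timestamp map. A std-only sketch of that shape (assumption: `Debouncer` and its method names are illustrative, not from the codebase):

```rust
use std::collections::HashMap;
use std::sync::Mutex;
use std::time::{Duration, Instant};

struct Debouncer {
    min_interval: Duration,
    last_seen: Mutex<HashMap<String, Instant>>,
}

impl Debouncer {
    fn new(min_interval: Duration) -> Self {
        Self { min_interval, last_seen: Mutex::new(HashMap::new()) }
    }

    /// Returns true (and records the timestamp) only if enough time has
    /// passed since the last accepted call for this key.
    fn should_run(&self, key: &str) -> bool {
        let mut map = self.last_seen.lock().unwrap();
        let now = Instant::now();
        if let Some(last) = map.get(key) {
            if now.duration_since(*last) < self.min_interval {
                return false; // debounced: too soon for this key
            }
        }
        map.insert(key.to_string(), now);
        true
    }
}

fn main() {
    let d = Debouncer::new(Duration::from_secs(30));
    assert!(d.should_run("agent-1"));  // first call passes
    assert!(!d.should_run("agent-1")); // immediate repeat is debounced
    assert!(d.should_run("agent-2"));  // other keys are independent
}
```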
/// Check for evolvable patterns and push suggestions to EvolutionMiddleware.
async fn check_and_push_evolution(&self, agent_id: &zclaw_types::AgentId) {
let evolution_mw = match &self.evolution_mw {
Some(mw) => mw,
None => return,
};
match self.growth.check_evolution(agent_id).await {
Ok(patterns) if !patterns.is_empty() => {
for pattern in &patterns {
let trigger = pattern
.common_steps
.first()
.cloned()
.unwrap_or_else(|| pattern.pain_pattern.clone());
evolution_mw.add_pending(
crate::middleware::evolution::PendingEvolution {
pattern_name: pattern.pain_pattern.clone(),
trigger_suggestion: trigger,
description: format!(
"基于 {} 次重复经验,自动固化技能",
pattern.total_reuse
),
},
).await;
}
tracing::info!(
"[MemoryMiddleware] Pushed {} evolution candidates for agent {}",
patterns.len(),
agent_id
);
}
Ok(_) => {
tracing::debug!("[MemoryMiddleware] No evolvable patterns found");
}
Err(e) => {
tracing::debug!(
"[MemoryMiddleware] Evolution check failed (non-fatal): {}", e
);
}
}
}
}
#[async_trait]
@@ -65,11 +118,6 @@ impl AgentMiddleware for MemoryMiddleware {
ctx.user_input.chars().take(50).collect::<String>()
);
// Retrieve relevant memories and inject into system prompt.
// The SqliteStorage retriever now uses FTS5-only matching — if FTS5 finds
// no relevant results, no memories are returned (no scope-based fallback).
// This prevents irrelevant high-importance memories from leaking into
// unrelated conversations.
let base = &ctx.system_prompt;
match self.growth.enhance_prompt(&ctx.agent_id, base, &ctx.user_input).await {
Ok(enhanced) => {
@@ -88,7 +136,6 @@ impl AgentMiddleware for MemoryMiddleware {
Ok(MiddlewareDecision::Continue)
}
Err(e) => {
// Non-fatal: retrieval failure should not block the conversation
tracing::warn!(
"[MemoryMiddleware] Memory retrieval failed (non-fatal): {}",
e
@@ -99,7 +146,6 @@ impl AgentMiddleware for MemoryMiddleware {
}
async fn after_completion(&self, ctx: &MiddlewareContext) -> Result<()> {
// Debounce: skip extraction if called too recently for this agent
let agent_key = ctx.agent_id.to_string();
if !self.should_extract(&agent_key) {
tracing::debug!(
@@ -113,8 +159,6 @@ impl AgentMiddleware for MemoryMiddleware {
return Ok(());
}
// Combined extraction: single LLM call produces both memories and structured facts.
// Avoids double LLM extraction ( process_conversation + extract_structured_facts).
match self.growth.extract_combined(
&ctx.agent_id,
&ctx.messages,
@@ -127,12 +171,14 @@ impl AgentMiddleware for MemoryMiddleware {
facts.len(),
agent_key
);
// Check for evolvable patterns after successful extraction
self.check_and_push_evolution(&ctx.agent_id).await;
}
Ok(None) => {
tracing::debug!("[MemoryMiddleware] No memories or facts extracted");
}
Err(e) => {
// Non-fatal: extraction failure should not affect the response
tracing::warn!("[MemoryMiddleware] Combined extraction failed: {}", e);
}
}

View File

@@ -11,7 +11,7 @@ use tokio::sync::RwLock;
use zclaw_memory::trajectory_store::{
TrajectoryEvent, TrajectoryStepType, TrajectoryStore,
};
use zclaw_types::{Result, SessionId};
use zclaw_types::Result;
use crate::driver::ContentBlock;
use crate::middleware::{AgentMiddleware, MiddlewareContext, MiddlewareDecision};

View File

@@ -2,12 +2,15 @@
//!
//! Three-layer fallback strategy:
//! 1. Regex pattern matching (covers ~80% of common expressions)
//! 2. LLM-assisted parsing (for ambiguous/complex expressions) — TODO: wire when Haiku driver available
//! 2. LLM-assisted parsing (for ambiguous/complex expressions) — FUTURE: planned for a post-release pass
//! 3. Interactive clarification (return `Unclear`)
//!
//! Lives in `zclaw-runtime` because it's a pure text→cron utility with no kernel dependency.
use chrono::{Datelike, Timelike};
use std::sync::LazyLock;
use chrono::Timelike;
use regex::Regex;
use serde::{Deserialize, Serialize};
use zclaw_types::AgentId;
@@ -56,20 +59,79 @@ pub enum ScheduleParseResult {
}
// ---------------------------------------------------------------------------
// Regex pattern library
// Pre-compiled regex patterns (LazyLock — compiled once, reused forever)
// ---------------------------------------------------------------------------
/// A single pattern for matching Chinese time expressions.
struct SchedulePattern {
/// Regex pattern string
regex: &'static str,
/// Cron template — use {h} for hour, {m} for minute, {dow} for day-of-week, {dom} for day-of-month
cron_template: &'static str,
/// Human description template
description: &'static str,
/// Base confidence for this pattern
confidence: f32,
}
/// Time-of-day period fragment used across multiple patterns.
const PERIOD: &str = "(凌晨|早上|早晨|上午|中午|下午|午后|傍晚|黄昏|晚上|晚间|夜里|夜晚|半夜|午夜)?";
// extract_task_description
static RE_TIME_STRIP: LazyLock<Regex> = LazyLock::new(|| {
Regex::new(
r"^(?:凌晨|早上|早晨|上午|中午|下午|午后|傍晚|黄昏|晚上|晚间|夜里|夜晚|半夜|午夜)?\d{1,2}[点时:]\d{0,2}分?"
).expect("static regex pattern is valid")
});
// try_every_day
static RE_EVERY_DAY_EXACT: LazyLock<Regex> = LazyLock::new(|| {
Regex::new(&format!(
r"(?:每天|每日)(?:的)?{}(\d{{1,2}})[点时:](\d{{1,2}})?",
PERIOD
)).expect("static regex pattern is valid")
});
static RE_EVERY_DAY_PERIOD: LazyLock<Regex> = LazyLock::new(|| {
Regex::new(
r"(?:每天|每日)(?:的)?(凌晨|早上|早晨|上午|中午|下午|午后|傍晚|黄昏|晚上|晚间|夜里|夜晚|半夜|午夜)"
).expect("static regex pattern is valid")
});
// try_every_week
static RE_EVERY_WEEK: LazyLock<Regex> = LazyLock::new(|| {
Regex::new(&format!(
r"(?:每周|每个?星期|每个?礼拜)(一|二|三|四|五|六|日|天|周一|周二|周三|周四|周五|周六|周日|周天|星期一|星期二|星期三|星期四|星期五|星期六|星期日|星期天|礼拜一|礼拜二|礼拜三|礼拜四|礼拜五|礼拜六|礼拜日|礼拜天)(?:的)?{}(\d{{1,2}})[点时:](\d{{1,2}})?",
PERIOD
)).expect("static regex pattern is valid")
});
// try_workday
static RE_WORKDAY_EXACT: LazyLock<Regex> = LazyLock::new(|| {
Regex::new(&format!(
r"(?:工作日|每个?工作日|工作日(?:的)?){}(\d{{1,2}})[点时:](\d{{1,2}})?",
PERIOD
)).expect("static regex pattern is valid")
});
static RE_WORKDAY_PERIOD: LazyLock<Regex> = LazyLock::new(|| {
Regex::new(
r"(?:工作日|每个?工作日)(?:的)?(凌晨|早上|早晨|上午|中午|下午|午后|傍晚|黄昏|晚上|晚间|夜里|夜晚|半夜|午夜)"
).expect("static regex pattern is valid")
});
// try_interval
static RE_INTERVAL: LazyLock<Regex> = LazyLock::new(|| {
Regex::new(r"每(\d{1,2})(小时|分钟|分|钟|个小时)").expect("static regex pattern is valid")
});
// try_monthly
static RE_MONTHLY: LazyLock<Regex> = LazyLock::new(|| {
Regex::new(&format!(
r"(?:每月|每个月)(?:的)?(\d{{1,2}})[号日](?:的)?{}(\d{{1,2}})?[点时:]?(\d{{1,2}})?",
PERIOD
)).expect("static regex pattern is valid")
});
// try_one_shot
static RE_ONE_SHOT: LazyLock<Regex> = LazyLock::new(|| {
Regex::new(&format!(
r"(明天|后天|大后天)(?:的)?{}(\d{{1,2}})[点时:](\d{{1,2}})?",
PERIOD
)).expect("static regex pattern is valid")
});
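The statics above move regex compilation into `LazyLock` so each pattern is compiled at most once, on first use, instead of on every call. A std-only sketch of that guarantee (an init counter stands in for regex compilation):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::LazyLock;

static INIT_COUNT: AtomicUsize = AtomicUsize::new(0);

// The closure runs at most once, on first dereference — the same
// guarantee the RE_* statics above rely on for regex compilation.
static EXPENSIVE: LazyLock<String> = LazyLock::new(|| {
    INIT_COUNT.fetch_add(1, Ordering::SeqCst);
    "compiled".to_string()
});

fn main() {
    assert_eq!(&*EXPENSIVE, "compiled");
    assert_eq!(&*EXPENSIVE, "compiled"); // second use: no re-initialization
    assert_eq!(INIT_COUNT.load(Ordering::SeqCst), 1);
}
```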
// ---------------------------------------------------------------------------
// Helper lookups (pure functions, no allocation)
// ---------------------------------------------------------------------------
/// Chinese time period keywords → hour mapping
fn period_to_hour(period: &str) -> Option<u32> {
@@ -99,6 +161,23 @@ fn weekday_to_cron(day: &str) -> Option<&'static str> {
}
}
/// Adjust hour based on time-of-day period. Chinese 12-hour convention:
/// 下午3点 = 15, 晚上8点 = 20, etc. Morning hours stay as-is.
fn adjust_hour_for_period(hour: u32, period: Option<&str>) -> u32 {
if let Some(p) = period {
match p {
"下午" | "午后" => { if hour < 12 { hour + 12 } else { hour } }
"晚上" | "晚间" | "夜里" | "夜晚" => { if hour < 12 { hour + 12 } else { hour } }
"傍晚" | "黄昏" => { if hour < 12 { hour + 12 } else { hour } }
"中午" => { if hour == 12 { 12 } else if hour < 12 { hour + 12 } else { hour } }
"半夜" | "午夜" => { if hour == 12 { 0 } else { hour } }
_ => hour,
}
} else {
hour
}
}
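A few worked cases for the 12-hour adjustment above; the function body is copied verbatim from the diff so the sketch compiles standalone:

```rust
/// Copy of adjust_hour_for_period from the diff above, reproduced so
/// the assertions below run standalone.
fn adjust_hour_for_period(hour: u32, period: Option<&str>) -> u32 {
    if let Some(p) = period {
        match p {
            "下午" | "午后" => { if hour < 12 { hour + 12 } else { hour } }
            "晚上" | "晚间" | "夜里" | "夜晚" => { if hour < 12 { hour + 12 } else { hour } }
            "傍晚" | "黄昏" => { if hour < 12 { hour + 12 } else { hour } }
            "中午" => { if hour == 12 { 12 } else if hour < 12 { hour + 12 } else { hour } }
            "半夜" | "午夜" => { if hour == 12 { 0 } else { hour } }
            _ => hour,
        }
    } else {
        hour
    }
}

fn main() {
    assert_eq!(adjust_hour_for_period(3, Some("下午")), 15); // 下午3点 → 15:00
    assert_eq!(adjust_hour_for_period(8, Some("晚上")), 20); // 晚上8点 → 20:00
    assert_eq!(adjust_hour_for_period(12, Some("午夜")), 0); // 午夜12点 → 00:00
    assert_eq!(adjust_hour_for_period(9, Some("早上")), 9);  // morning hours stay as-is
    assert_eq!(adjust_hour_for_period(7, None), 7);          // no period: unchanged
}
```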
// ---------------------------------------------------------------------------
// Parser implementation
// ---------------------------------------------------------------------------
@@ -113,35 +192,23 @@ pub fn parse_nl_schedule(input: &str, default_agent_id: &AgentId) -> SchedulePar
return ScheduleParseResult::Unclear;
}
// Extract task description (everything after keywords like "提醒我", "帮我")
let task_description = extract_task_description(input);
// --- Pattern 1: 每天 + 时间 ---
if let Some(result) = try_every_day(input, &task_description, default_agent_id) {
return result;
}
// --- Pattern 2: 每周N + 时间 ---
if let Some(result) = try_every_week(input, &task_description, default_agent_id) {
return result;
}
// --- Pattern 3: 工作日 + 时间 ---
if let Some(result) = try_workday(input, &task_description, default_agent_id) {
return result;
}
// --- Pattern 4: 每N小时/分钟 ---
if let Some(result) = try_interval(input, &task_description, default_agent_id) {
return result;
}
// --- Pattern 5: 每月N号 ---
if let Some(result) = try_monthly(input, &task_description, default_agent_id) {
return result;
}
// --- Pattern 6: 明天/后天 + 时间 (one-shot) ---
if let Some(result) = try_one_shot(input, &task_description, default_agent_id) {
return result;
}
@@ -160,13 +227,7 @@ fn extract_task_description(input: &str) -> String {
let mut desc = input.to_string();
// Strip prefixes + time expressions in alternating passes until stable
let time_re = regex::Regex::new(
r"^(?:凌晨|早上|早晨|上午|中午|下午|午后|傍晚|黄昏|晚上|晚间|夜里|夜晚|半夜|午夜)?\d{1,2}[点时:]\d{0,2}分?"
).unwrap_or_else(|_| regex::Regex::new("").unwrap());
for _ in 0..3 {
// Pass 1: strip prefixes
loop {
let mut stripped = false;
for prefix in &strip_prefixes {
@@ -177,8 +238,7 @@ fn extract_task_description(input: &str) -> String {
}
if !stripped { break; }
}
// Pass 2: strip time expressions
let new_desc = time_re.replace(&desc, "").to_string();
let new_desc = RE_TIME_STRIP.replace(&desc, "").to_string();
if new_desc == desc { break; }
desc = new_desc;
}
@@ -186,32 +246,10 @@ fn extract_task_description(input: &str) -> String {
desc.trim().to_string()
}
// -- Pattern matchers --
/// Adjust hour based on time-of-day period. Chinese 12-hour convention:
/// 下午3点 = 15, 晚上8点 = 20, etc. Morning hours stay as-is.
fn adjust_hour_for_period(hour: u32, period: Option<&str>) -> u32 {
if let Some(p) = period {
match p {
"下午" | "午后" => { if hour < 12 { hour + 12 } else { hour } }
"晚上" | "晚间" | "夜里" | "夜晚" => { if hour < 12 { hour + 12 } else { hour } }
"傍晚" | "黄昏" => { if hour < 12 { hour + 12 } else { hour } }
"中午" => { if hour == 12 { 12 } else if hour < 12 { hour + 12 } else { hour } }
"半夜" | "午夜" => { if hour == 12 { 0 } else { hour } }
_ => hour,
}
} else {
hour
}
}
const PERIOD_PATTERN: &str = "(凌晨|早上|早晨|上午|中午|下午|午后|傍晚|黄昏|晚上|晚间|夜里|夜晚|半夜|午夜)?";
// -- Pattern matchers (all use pre-compiled statics) --
fn try_every_day(input: &str, task_desc: &str, agent_id: &AgentId) -> Option<ScheduleParseResult> {
let re = regex::Regex::new(
&format!(r"(?:每天|每日)(?:的)?{}(\d{{1,2}})[点时:](\d{{1,2}})?", PERIOD_PATTERN)
).ok()?;
if let Some(caps) = re.captures(input) {
if let Some(caps) = RE_EVERY_DAY_EXACT.captures(input) {
let period = caps.get(1).map(|m| m.as_str());
let raw_hour: u32 = caps.get(2)?.as_str().parse().ok()?;
let minute: u32 = caps.get(3).map(|m| m.as_str().parse().unwrap_or(0)).unwrap_or(0);
@@ -228,9 +266,7 @@ fn try_every_day(input: &str, task_desc: &str, agent_id: &AgentId) -> Option<Sch
}));
}
// "每天早上/下午..." without explicit hour
let re2 = regex::Regex::new(r"(?:每天|每日)(?:的)?(凌晨|早上|早晨|上午|中午|下午|午后|傍晚|黄昏|晚上|晚间|夜里|夜晚|半夜|午夜)").ok()?;
if let Some(caps) = re2.captures(input) {
if let Some(caps) = RE_EVERY_DAY_PERIOD.captures(input) {
let period = caps.get(1)?.as_str();
if let Some(hour) = period_to_hour(period) {
return Some(ScheduleParseResult::Exact(ParsedSchedule {
@@ -247,11 +283,7 @@ fn try_every_day(input: &str, task_desc: &str, agent_id: &AgentId) -> Option<Sch
}
fn try_every_week(input: &str, task_desc: &str, agent_id: &AgentId) -> Option<ScheduleParseResult> {
let re = regex::Regex::new(
&format!(r"(?:每周|每个?星期|每个?礼拜)(一|二|三|四|五|六|日|天|周一|周二|周三|周四|周五|周六|周日|周天|星期一|星期二|星期三|星期四|星期五|星期六|星期日|星期天|礼拜一|礼拜二|礼拜三|礼拜四|礼拜五|礼拜六|礼拜日|礼拜天)(?:的)?{}(\d{{1,2}})[点时:](\d{{1,2}})?", PERIOD_PATTERN)
).ok()?;
let caps = re.captures(input)?;
let caps = RE_EVERY_WEEK.captures(input)?;
let day_str = caps.get(1)?.as_str();
let dow = weekday_to_cron(day_str)?;
let period = caps.get(2).map(|m| m.as_str());
@@ -272,11 +304,7 @@ fn try_every_week(input: &str, task_desc: &str, agent_id: &AgentId) -> Option<Sc
}
fn try_workday(input: &str, task_desc: &str, agent_id: &AgentId) -> Option<ScheduleParseResult> {
let re = regex::Regex::new(
&format!(r"(?:工作日|每个?工作日|工作日(?:的)?){}(\d{{1,2}})[点时:](\d{{1,2}})?", PERIOD_PATTERN)
).ok()?;
if let Some(caps) = re.captures(input) {
if let Some(caps) = RE_WORKDAY_EXACT.captures(input) {
let period = caps.get(1).map(|m| m.as_str());
let raw_hour: u32 = caps.get(2)?.as_str().parse().ok()?;
let minute: u32 = caps.get(3).map(|m| m.as_str().parse().unwrap_or(0)).unwrap_or(0);
@@ -293,11 +321,7 @@ fn try_workday(input: &str, task_desc: &str, agent_id: &AgentId) -> Option<Sched
}));
}
// "工作日下午3点" style
let re2 = regex::Regex::new(
r"(?:工作日|每个?工作日)(?:的)?(凌晨|早上|早晨|上午|中午|下午|午后|傍晚|黄昏|晚上|晚间|夜里|夜晚|半夜|午夜)"
).ok()?;
if let Some(caps) = re2.captures(input) {
if let Some(caps) = RE_WORKDAY_PERIOD.captures(input) {
let period = caps.get(1)?.as_str();
if let Some(hour) = period_to_hour(period) {
return Some(ScheduleParseResult::Exact(ParsedSchedule {
@@ -314,9 +338,7 @@ fn try_workday(input: &str, task_desc: &str, agent_id: &AgentId) -> Option<Sched
}
fn try_interval(input: &str, task_desc: &str, agent_id: &AgentId) -> Option<ScheduleParseResult> {
// "每2小时", "每30分钟", "每N小时/分钟"
let re = regex::Regex::new(r"每(\d{1,2})(小时|分钟|分|钟|个小时)").ok()?;
if let Some(caps) = re.captures(input) {
if let Some(caps) = RE_INTERVAL.captures(input) {
let n: u32 = caps.get(1)?.as_str().parse().ok()?;
if n == 0 {
return None;
@@ -340,11 +362,7 @@ fn try_interval(input: &str, task_desc: &str, agent_id: &AgentId) -> Option<Sche
}
fn try_monthly(input: &str, task_desc: &str, agent_id: &AgentId) -> Option<ScheduleParseResult> {
let re = regex::Regex::new(
&format!(r"(?:每月|每个月)(?:的)?(\d{{1,2}})[号日](?:的)?{}(\d{{1,2}})?[点时:]?(\d{{1,2}})?", PERIOD_PATTERN)
).ok()?;
if let Some(caps) = re.captures(input) {
if let Some(caps) = RE_MONTHLY.captures(input) {
let day: u32 = caps.get(1)?.as_str().parse().ok()?;
let period = caps.get(2).map(|m| m.as_str());
let raw_hour: u32 = caps.get(3).map(|m| m.as_str().parse().unwrap_or(9)).unwrap_or(9);
@@ -366,11 +384,7 @@ fn try_monthly(input: &str, task_desc: &str, agent_id: &AgentId) -> Option<Sched
}
fn try_one_shot(input: &str, task_desc: &str, agent_id: &AgentId) -> Option<ScheduleParseResult> {
let re = regex::Regex::new(
&format!(r"(明天|后天|大后天)(?:的)?{}(\d{{1,2}})[点时:](\d{{1,2}})?", PERIOD_PATTERN)
).ok()?;
let caps = re.captures(input)?;
let caps = RE_ONE_SHOT.captures(input)?;
let day_offset = match caps.get(1)?.as_str() {
"明天" => 1,
"后天" => 2,

View File

@@ -112,12 +112,14 @@ impl Tool for TaskTool {
let task_id = sub_agent_id.to_string();
if let Some(ref tx) = context.event_sender {
let _ = tx.send(LoopEvent::SubtaskStatus {
if tx.send(LoopEvent::SubtaskStatus {
task_id: task_id.clone(),
description: description.to_string(),
status: "started".to_string(),
detail: None,
}).await;
}).await.is_err() {
tracing::debug!("[TaskTool] Subtask status dropped: parent loop ended");
}
}
// Create a fresh session for the sub-agent
@@ -161,12 +163,14 @@ impl Tool for TaskTool {
// Emit subtask_running event
if let Some(ref tx) = context.event_sender {
let _ = tx.send(LoopEvent::SubtaskStatus {
if tx.send(LoopEvent::SubtaskStatus {
task_id: task_id.clone(),
description: description.to_string(),
status: "running".to_string(),
detail: Some("子Agent正在执行中...".to_string()),
}).await;
}).await.is_err() {
tracing::debug!("[TaskTool] Subtask status dropped: parent loop ended");
}
}
// Execute the sub-agent loop (non-streaming — collect full result)
@@ -179,7 +183,7 @@ impl Tool for TaskTool {
// Emit subtask_completed event
if let Some(ref tx) = context.event_sender {
let _ = tx.send(LoopEvent::SubtaskStatus {
if tx.send(LoopEvent::SubtaskStatus {
task_id: task_id.clone(),
description: description.to_string(),
status: "completed".to_string(),
@@ -187,7 +191,9 @@ impl Tool for TaskTool {
"完成 ({}次迭代, {}输入token)",
loop_result.iterations, loop_result.input_tokens
)),
}).await;
}).await.is_err() {
tracing::debug!("[TaskTool] Subtask status dropped: parent loop ended");
}
}
json!({
@@ -204,12 +210,14 @@ impl Tool for TaskTool {
// Emit subtask_failed event
if let Some(ref tx) = context.event_sender {
let _ = tx.send(LoopEvent::SubtaskStatus {
if tx.send(LoopEvent::SubtaskStatus {
task_id: task_id.clone(),
description: description.to_string(),
status: "failed".to_string(),
detail: Some(e.to_string()),
}).await;
}).await.is_err() {
tracing::debug!("[TaskTool] Subtask status dropped: parent loop ended");
}
}
json!({

View File

@@ -1,3 +1,7 @@
-- NOTE: DEPRECATED — These tables are defined but NOT consumed by any Rust code.
-- Kept for schema compatibility. Will be removed in a future cleanup pass.
-- See: V13 audit FIX-04
-- Webhook subscriptions: external endpoints that receive event notifications
CREATE TABLE IF NOT EXISTS webhook_subscriptions (
id TEXT PRIMARY KEY,
@@ -26,3 +30,10 @@ CREATE TABLE IF NOT EXISTS webhook_deliveries (
CREATE INDEX IF NOT EXISTS idx_webhook_subscriptions_account ON webhook_subscriptions(account_id);
CREATE INDEX IF NOT EXISTS idx_webhook_subscriptions_events ON webhook_subscriptions USING gin(events);
CREATE INDEX IF NOT EXISTS idx_webhook_deliveries_pending ON webhook_deliveries(subscription_id) WHERE delivered_at IS NULL;
-- === DOWN MIGRATION ===
-- DROP INDEX IF EXISTS idx_webhook_deliveries_pending;
-- DROP INDEX IF EXISTS idx_webhook_subscriptions_events;
-- DROP INDEX IF EXISTS idx_webhook_subscriptions_account;
-- DROP TABLE IF EXISTS webhook_deliveries;
-- DROP TABLE IF EXISTS webhook_subscriptions;

View File

@@ -0,0 +1,11 @@
-- Add missing indexes for performance-critical queries
-- 2026-04-18 Release readiness audit
-- Rate limit events cleanup (DELETE WHERE created_at < ...)
CREATE INDEX IF NOT EXISTS idx_rle_created_at ON rate_limit_events(created_at);
-- Billing subscriptions plan lookup
CREATE INDEX IF NOT EXISTS idx_billing_sub_plan ON billing_subscriptions(plan_id);
-- Knowledge items created_by lookup
CREATE INDEX IF NOT EXISTS idx_ki_created_by ON knowledge_items(created_by);

View File

@@ -16,8 +16,13 @@ pub fn routes() -> axum::Router<crate::state::AppState> {
.route("/api/v1/tokens", post(handlers::create_token))
.route("/api/v1/tokens/:id", delete(handlers::revoke_token))
.route("/api/v1/logs/operations", get(handlers::list_operation_logs))
.route("/api/v1/stats/dashboard", get(handlers::dashboard_stats))
.route("/api/v1/devices", get(handlers::list_devices))
.route("/api/v1/devices/register", post(handlers::register_device))
.route("/api/v1/devices/heartbeat", post(handlers::device_heartbeat))
}
/// Admin-only routes (must be protected by admin_guard_middleware)
pub fn admin_routes() -> axum::Router<crate::state::AppState> {
axum::Router::new()
.route("/api/v1/admin/dashboard", get(handlers::dashboard_stats))
}

View File

@@ -215,7 +215,10 @@ pub async fn login(
.bind(&r.id)
.fetch_one(&state.db)
.await
-.unwrap_or(false);
+.map_err(|e| {
+tracing::warn!(account_id = %r.id, error = %e, "Lockout check query failed");
+SaasError::Internal("账号状态检查失败,请重试".into())
+})?;
if is_locked {
return Err(SaasError::AuthError("账号已被临时锁定,请稍后再试".into()));
@@ -562,7 +565,7 @@ async fn store_refresh_token(
/// Clean up expired and already-used refresh tokens
/// Note: this has been migrated to the Worker/Scheduler for periodic execution; the function is kept as a fallback
-#[allow(dead_code)]
+#[allow(dead_code)] // @reserved: backup for Worker/Scheduler cleanup; kept as fallback
async fn cleanup_expired_refresh_tokens(db: &sqlx::PgPool) -> SaasResult<()> {
let now = chrono::Utc::now();
// 删除过期超过 30 天的已使用 token (减少 DB 膨胀)
@@ -631,5 +634,32 @@ pub async fn logout(
}
}
// Fallback: if no refresh token was found, try to extract account_id from the access token cookie
// (when the Tauri desktop client uses Bearer auth, the logout body may not contain refresh_token)
if tokens_to_check.is_empty() {
if let Some(access_cookie) = jar.get(ACCESS_TOKEN_COOKIE) {
let access_val = access_cookie.value().to_string();
if let Ok(claims) = verify_token_skip_expiry(&access_val, jwt_secret) {
let now = chrono::Utc::now();
let result = sqlx::query(
"UPDATE refresh_tokens SET used_at = $1 WHERE account_id = $2 AND used_at IS NULL"
)
.bind(&now)
.bind(&claims.sub)
.execute(&state.db)
.await;
match result {
Ok(r) => {
tracing::info!(account_id = %claims.sub, n = r.rows_affected(), "Refresh tokens revoked via access token fallback");
}
Err(e) => {
tracing::warn!(account_id = %claims.sub, error = %e, "Failed to revoke refresh tokens (access fallback)");
}
}
}
}
}
(clear_auth_cookies(jar), axum::http::StatusCode::NO_CONTENT)
}

View File

@@ -203,6 +203,27 @@ pub async fn auth_middleware(
}
}
/// Admin route guard middleware: ensures the AuthContext carries the admin/super_admin role
/// Must run after auth_middleware (depends on Extension<AuthContext>)
pub async fn admin_guard_middleware(
mut req: Request,
next: Next,
) -> Response {
use crate::auth::handlers::check_permission;
let ctx = req.extensions().get::<AuthContext>().cloned();
match ctx {
Some(ctx) => {
if let Err(e) = check_permission(&ctx, "account:admin") {
e.into_response()
} else {
next.run(req).await
}
}
None => SaasError::Unauthorized.into_response(),
}
}
/// Routes (endpoints that require no authentication)
pub fn routes() -> axum::Router<AppState> {
use axum::routing::post;

View File

@@ -7,6 +7,7 @@ use axum::{
use serde::Deserialize;
use crate::auth::types::AuthContext;
use crate::auth::handlers::{log_operation, check_permission};
use crate::error::{SaasError, SaasResult};
use crate::state::AppState;
use super::service;
@@ -39,9 +40,23 @@ pub async fn get_subscription(
let sub = service::get_active_subscription(&state.db, &ctx.account_id).await?;
let usage = service::get_or_create_usage(&state.db, &ctx.account_id).await?;
// P2-14 fix: synthesize an "active" subscription when a super_admin has no subscription
let sub_value = if sub.is_none() && ctx.role == "super_admin" {
Some(serde_json::json!({
"id": format!("sub-admin-{}", &ctx.account_id.chars().take(8).collect::<String>()),
"account_id": ctx.account_id,
"plan_id": plan.id,
"status": "active",
"current_period_start": usage.period_start,
"current_period_end": usage.period_end,
}))
} else {
sub.map(|s| serde_json::to_value(s).unwrap_or_default())
};
Ok(Json(serde_json::json!({
"plan": plan,
-"subscription": sub,
+"subscription": sub_value,
"usage": usage,
})))
}
@@ -101,6 +116,41 @@ pub async fn increment_usage_dimension(
})))
}
/// POST /api/v1/billing/payments — 创建支付订单
/// PUT /api/v1/admin/accounts/:id/subscription — admin switches a user's subscription plan (super_admin only)
pub async fn admin_switch_subscription(
State(state): State<AppState>,
Extension(ctx): Extension<AuthContext>,
Path(account_id): Path<String>,
Json(req): Json<AdminSwitchPlanRequest>,
) -> SaasResult<Json<serde_json::Value>> {
// Only super_admin may perform this operation
check_permission(&ctx, "admin:full")?;
// Validate that plan_id is non-empty
if req.plan_id.trim().is_empty() {
return Err(SaasError::InvalidInput("plan_id 不能为空".into()));
}
let sub = service::admin_switch_plan(&state.db, &account_id, &req.plan_id).await?;
log_operation(
&state.db,
&ctx.account_id,
"billing.admin_switch_plan",
"account",
&account_id,
Some(serde_json::json!({ "plan_id": req.plan_id })),
None,
).await.ok(); // a logging failure does not affect the main flow
Ok(Json(serde_json::json!({
"success": true,
"subscription": sub,
})))
}
/// POST /api/v1/billing/payments — 创建支付订单
pub async fn create_payment(
State(state): State<AppState>,
@@ -537,7 +587,7 @@ pub async fn get_invoice_pdf(
})?;
// 返回 PDF 响应
-Ok(axum::response::Response::builder()
+axum::response::Response::builder()
.status(200)
.header("Content-Type", "application/pdf")
.header(
@@ -545,5 +595,8 @@ pub async fn get_invoice_pdf(
format!("attachment; filename=\"invoice-{}.pdf\"", invoice.id),
)
.body(axum::body::Body::from(bytes))
-.unwrap())
+.map_err(|e| {
+tracing::error!("Failed to build PDF response: {}", e);
+SaasError::Internal("PDF 响应构建失败".into())
+})
}

View File

@@ -6,7 +6,7 @@ pub mod handlers;
pub mod payment;
pub mod invoice_pdf;
-use axum::routing::{get, post};
+use axum::routing::{get, post, put};
/// All billing routes (mounted once from main.rs)
pub fn routes() -> axum::Router<crate::state::AppState> {
@@ -51,3 +51,9 @@ pub fn mock_routes() -> axum::Router<crate::state::AppState> {
.route("/api/v1/billing/mock-pay", get(handlers::mock_pay_page))
.route("/api/v1/billing/mock-pay/confirm", post(handlers::mock_pay_confirm))
}
/// Admin billing routes (require super_admin permission)
pub fn admin_routes() -> axum::Router<crate::state::AppState> {
axum::Router::new()
.route("/api/v1/admin/accounts/:id/subscription", put(handlers::admin_switch_subscription))
}

View File

@@ -101,6 +101,7 @@ pub async fn create_payment(
Ok(PaymentResult {
payment_id,
invoice_id,
trade_no,
pay_url,
amount_cents: plan.price_cents,
@@ -272,8 +273,8 @@ pub async fn query_payment_status(
payment_id: &str,
account_id: &str,
) -> SaasResult<serde_json::Value> {
-let payment: (String, String, i32, String, String) = sqlx::query_as::<_, (String, String, i32, String, String)>(
-"SELECT id, method, amount_cents, currency, status \
+let payment: (String, String, String, i32, String, String) = sqlx::query_as::<_, (String, String, String, i32, String, String)>(
+"SELECT id, invoice_id, method, amount_cents, currency, status \
FROM billing_payments WHERE id = $1 AND account_id = $2"
)
.bind(payment_id)
@@ -282,9 +283,10 @@ pub async fn query_payment_status(
.await?
.ok_or_else(|| SaasError::NotFound("支付记录不存在".into()))?;
-let (id, method, amount, currency, status) = payment;
+let (id, invoice_id, method, amount, currency, status) = payment;
Ok(serde_json::json!({
"id": id,
"invoice_id": invoice_id,
"method": method,
"amount_cents": amount,
"currency": currency,

View File

@@ -114,7 +114,26 @@ pub async fn get_or_create_usage(pool: &PgPool, account_id: &str) -> SaasResult<
.await?;
if let Some(usage) = existing {
-return Ok(usage);
// P1-07 fix: sync the current plan limits into the max_* columns (prevents stale data after a plan change)
let plan = get_account_plan(pool, account_id).await?;
let limits: PlanLimits = serde_json::from_value(plan.limits.clone())
.unwrap_or_else(|_| PlanLimits::free());
sqlx::query(
"UPDATE billing_usage_quotas SET max_input_tokens=$2, max_output_tokens=$3, \
max_relay_requests=$4, max_hand_executions=$5, max_pipeline_runs=$6, updated_at=NOW() \
WHERE id=$1"
)
.bind(&usage.id)
.bind(limits.max_input_tokens_monthly)
.bind(limits.max_output_tokens_monthly)
.bind(limits.max_relay_requests_monthly)
.bind(limits.max_hand_executions_monthly)
.bind(limits.max_pipeline_runs_monthly)
.execute(pool).await?;
let updated = sqlx::query_as::<_, UsageQuota>(
"SELECT * FROM billing_usage_quotas WHERE id = $1"
).bind(&usage.id).fetch_one(pool).await?;
return Ok(updated);
}
// Fetch the current plan limits
@@ -281,20 +300,119 @@ pub async fn increment_dimension_by(
Ok(())
}
/// Admin switches a user's subscription plan (called by super_admin only)
///
/// 1. Verify the target plan_id exists and is active
/// 2. Cancel the user's current active subscription
/// 3. Create a new subscription (status=active, 30-day period)
/// 4. Update the max_* columns of the current month's usage quota
pub async fn admin_switch_plan(
pool: &PgPool,
account_id: &str,
target_plan_id: &str,
) -> SaasResult<Subscription> {
// 1. Verify the target plan exists and is active
let plan = get_plan(pool, target_plan_id).await?
.ok_or_else(|| crate::error::SaasError::NotFound("目标计划不存在或已下架".into()))?;
// 2. Check whether the user already subscribes to this plan
if let Some(current_sub) = get_active_subscription(pool, account_id).await? {
if current_sub.plan_id == target_plan_id {
return Err(crate::error::SaasError::InvalidInput("用户已订阅该计划".into()));
}
}
let mut tx = pool.begin().await
.map_err(|e| crate::error::SaasError::Internal(format!("开启事务失败: {}", e)))?;
let now = chrono::Utc::now();
// 3. Cancel the current active subscription
sqlx::query(
"UPDATE billing_subscriptions SET status = 'canceled', canceled_at = $1, updated_at = $1 \
WHERE account_id = $2 AND status IN ('trial', 'active', 'past_due')"
)
.bind(&now)
.bind(account_id)
.execute(&mut *tx)
.await?;
// 4. Create the new subscription
let sub_id = uuid::Uuid::new_v4().to_string();
let period_start = now;
let period_end = now + chrono::Duration::days(30);
sqlx::query(
"INSERT INTO billing_subscriptions \
(id, account_id, plan_id, status, current_period_start, current_period_end, created_at, updated_at) \
VALUES ($1, $2, $3, 'active', $4, $5, $6, $6)"
)
.bind(&sub_id)
.bind(account_id)
.bind(&target_plan_id)
.bind(&period_start)
.bind(&period_end)
.bind(&now)
.execute(&mut *tx)
.await?;
// 5. Sync the max_* columns of the current month's usage quota
let limits: PlanLimits = serde_json::from_value(plan.limits.clone())
.unwrap_or_else(|_| PlanLimits::free());
sqlx::query(
"UPDATE billing_usage_quotas SET max_input_tokens=$1, max_output_tokens=$2, \
max_relay_requests=$3, max_hand_executions=$4, max_pipeline_runs=$5, updated_at=NOW() \
WHERE account_id=$6 AND period_start = DATE_TRUNC('month', NOW())"
)
.bind(limits.max_input_tokens_monthly)
.bind(limits.max_output_tokens_monthly)
.bind(limits.max_relay_requests_monthly)
.bind(limits.max_hand_executions_monthly)
.bind(limits.max_pipeline_runs_monthly)
.bind(account_id)
.execute(&mut *tx)
.await?;
tx.commit().await
.map_err(|e| crate::error::SaasError::Internal(format!("事务提交失败: {}", e)))?;
// Query and return the new subscription
let sub = sqlx::query_as::<_, Subscription>(
"SELECT * FROM billing_subscriptions WHERE id = $1"
)
.bind(&sub_id)
.fetch_one(pool)
.await?;
Ok(sub)
}
/// Check the usage quota
///
/// P1-7 fix: read limits from the current Plan (not the stale redundant columns in the usage table)
/// P1-8 fix: support dual-dimension checks on relay_requests + input_tokens
pub async fn check_quota(
pool: &PgPool,
account_id: &str,
role: &str,
quota_type: &str,
) -> SaasResult<QuotaCheck> {
// P2-14 fix: super_admin is not subject to quota limits
if role == "super_admin" {
return Ok(QuotaCheck { allowed: true, reason: None, current: 0, limit: None, remaining: None });
}
let usage = get_or_create_usage(pool, account_id).await?;
// Read the real limits from the current Plan rather than the stale redundant columns in the usage table
let plan = get_account_plan(pool, account_id).await?;
let limits: crate::billing::types::PlanLimits = serde_json::from_value(plan.limits)
.unwrap_or_else(|_| crate::billing::types::PlanLimits::free());
let (current, limit) = match quota_type {
-"input_tokens" => (usage.input_tokens, usage.max_input_tokens),
-"output_tokens" => (usage.output_tokens, usage.max_output_tokens),
-"relay_requests" => (usage.relay_requests as i64, usage.max_relay_requests.map(|v| v as i64)),
-"hand_executions" => (usage.hand_executions as i64, usage.max_hand_executions.map(|v| v as i64)),
-"pipeline_runs" => (usage.pipeline_runs as i64, usage.max_pipeline_runs.map(|v| v as i64)),
+"input_tokens" => (usage.input_tokens, limits.max_input_tokens_monthly),
+"output_tokens" => (usage.output_tokens, limits.max_output_tokens_monthly),
+"relay_requests" => (usage.relay_requests as i64, limits.max_relay_requests_monthly.map(|v| v as i64)),
+"hand_executions" => (usage.hand_executions as i64, limits.max_hand_executions_monthly.map(|v| v as i64)),
+"pipeline_runs" => (usage.pipeline_runs as i64, limits.max_pipeline_runs_monthly.map(|v| v as i64)),
_ => return Ok(QuotaCheck {
allowed: true,
reason: None,
@@ -309,7 +427,7 @@ pub async fn check_quota(
Ok(QuotaCheck {
allowed,
-reason: if !allowed { Some(format!("{} 配额已用尽", quota_type)) } else { None },
+reason: if !allowed { Some(format!("{} 配额已用尽 (已用 {}/{})", quota_type, current, limit.unwrap_or(0))) } else { None },
current,
limit,
remaining,

View File

@@ -155,7 +155,14 @@ pub struct CreatePaymentRequest {
#[derive(Debug, Serialize)]
pub struct PaymentResult {
pub payment_id: String,
pub invoice_id: String,
pub trade_no: String,
pub pay_url: String,
pub amount_cents: i32,
}
/// Admin plan-switch request
#[derive(Debug, Deserialize)]
pub struct AdminSwitchPlanRequest {
pub plan_id: String,
}

View File

@@ -248,6 +248,37 @@ impl AppCache {
.map(|r| r.value().clone())
}
/// Look up a model by alias — for backward compatibility with legacy model IDs (e.g. "glm-4-flash" → "glm-4-flash-250414")
/// First matches exactly on the alias field, then falls back to a model_id prefix match (date suffix stripped)
pub fn resolve_model(&self, model_name: &str) -> Option<CachedModel> {
// 1. Direct model_id lookup
if let Some(m) = self.get_model(model_name) {
return Some(m);
}
// 2. Exact match on the alias field
for entry in self.models.iter() {
if entry.value().enabled && entry.value().alias == model_name {
return Some(entry.value().clone());
}
}
// 3. Prefix match: "glm-4-flash" matches suffixed models such as "glm-4-flash-250414"
for entry in self.models.iter() {
let mid = &entry.value().model_id;
if entry.value().enabled
&& (mid.starts_with(&format!("{}-", model_name))
|| mid.starts_with(&format!("{}v", model_name)))
{
tracing::info!(
"Model alias resolved: {} → {}",
model_name,
mid
);
return Some(entry.value().clone());
}
}
None
}
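The prefix-fallback rule in step 3 can be exercised in isolation. This is a minimal sketch, assuming the same `starts_with` conditions shown in the diff; `prefix_matches` is a hypothetical helper, not a function from the codebase:

```rust
/// Hypothetical helper mirroring step 3 of resolve_model: a bare name like
/// "glm-4-flash" should match date-suffixed IDs such as "glm-4-flash-250414".
fn prefix_matches(requested: &str, model_id: &str) -> bool {
    // Match "<name>-<suffix>" (date suffix) or "<name>v<n>" (version suffix).
    model_id.starts_with(&format!("{}-", requested))
        || model_id.starts_with(&format!("{}v", requested))
}

fn main() {
    // Date-suffixed ID matches the bare alias
    assert!(prefix_matches("glm-4-flash", "glm-4-flash-250414"));
    // Version-suffixed ID matches via the "v" rule
    assert!(prefix_matches("glm-4", "glm-4v"));
    // A longer unrelated name must not match (no "-" after the bare name)
    assert!(!prefix_matches("glm-4-flash", "glm-4-flashlight"));
    println!("ok");
}
```

Requiring the `-`/`v` separator after the requested name is what prevents "glm-4" from accidentally matching every "glm-4-flash-*" model when an exact alias also exists, since exact lookups run first.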
/// Look up an enabled Provider by provider id. O(1) DashMap lookup.
pub fn get_provider(&self, provider_id: &str) -> Option<CachedProvider> {
self.providers.get(provider_id)

View File

@@ -396,6 +396,23 @@ impl SaaSConfig {
config.database.url = db_url;
}
// Config validation
if config.auth.jwt_expiration_hours < 1 {
anyhow::bail!(
"auth.jwt_expiration_hours must be >= 1, got {}",
config.auth.jwt_expiration_hours
);
}
if config.database.max_connections == 0 {
anyhow::bail!("database.max_connections must be > 0");
}
if config.database.min_connections > config.database.max_connections {
anyhow::bail!(
"database.min_connections ({}) must be <= max_connections ({})",
config.database.min_connections, config.database.max_connections
);
}
Ok(config)
}
@@ -465,22 +482,25 @@ impl SaaSConfig {
/// Replace `${ENV_VAR}` patterns in the TOML config file with environment variable values.
/// Unset environment variables are left verbatim; the later database connection or JWT init will fail with a clear error.
///
/// Note: iterate with chars() rather than bytes() to handle multi-byte UTF-8 characters (e.g. Chinese) correctly,
/// avoiding encoding corruption from casting each byte of a multi-byte UTF-8 sequence `as char`.
fn interpolate_env_vars(content: &str) -> String {
let mut result = String::with_capacity(content.len());
-let bytes = content.as_bytes();
+let chars: Vec<char> = content.chars().collect();
let mut i = 0;
-while i < bytes.len() {
-if i + 1 < bytes.len() && bytes[i] == b'$' && bytes[i + 1] == b'{' {
+while i < chars.len() {
+if i + 1 < chars.len() && chars[i] == '$' && chars[i + 1] == '{' {
let start = i + 2;
let mut end = start;
-while end < bytes.len()
-&& (bytes[end].is_ascii_alphanumeric() || bytes[end] == b'_')
+while end < chars.len()
+&& (chars[end].is_ascii_alphanumeric() || chars[end] == '_')
{
end += 1;
}
-if end < bytes.len() && bytes[end] == b'}' {
-let var_name = std::str::from_utf8(&bytes[start..end]).unwrap_or("");
-match std::env::var(var_name) {
+if end < chars.len() && chars[end] == '}' {
+let var_name: String = chars[start..end].iter().collect();
+match std::env::var(&var_name) {
Ok(val) => {
tracing::debug!("Config: ${{{}}} → resolved ({} bytes)", var_name, val.len());
result.push_str(&val);
@@ -492,11 +512,11 @@ fn interpolate_env_vars(content: &str) -> String {
}
i = end + 1;
} else {
-result.push(bytes[i] as char);
+result.push(chars[i]);
i += 1;
}
} else {
-result.push(bytes[i] as char);
+result.push(chars[i]);
i += 1;
}
}
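The chars-vs-bytes point can be shown with a self-contained sketch of the same `${VAR}` interpolation idea. This is a simplified standalone version (not the patched function verbatim) whose char-based iteration keeps multi-byte text intact:

```rust
use std::env;

// Minimal sketch of ${VAR} interpolation over chars; unset vars are left verbatim.
fn interpolate(content: &str) -> String {
    let chars: Vec<char> = content.chars().collect();
    let mut result = String::with_capacity(content.len());
    let mut i = 0;
    while i < chars.len() {
        if i + 1 < chars.len() && chars[i] == '$' && chars[i + 1] == '{' {
            let start = i + 2;
            let mut end = start;
            while end < chars.len() && (chars[end].is_ascii_alphanumeric() || chars[end] == '_') {
                end += 1;
            }
            if end < chars.len() && chars[end] == '}' {
                let name: String = chars[start..end].iter().collect();
                match env::var(&name) {
                    Ok(val) => result.push_str(&val),
                    Err(_) => result.extend(&chars[i..=end]), // unset: keep original text
                }
                i = end + 1;
                continue;
            }
        }
        result.push(chars[i]); // multi-byte chars (e.g. 数据库) survive intact
        i += 1;
    }
    result
}

fn main() {
    env::set_var("DB_URL", "postgres://localhost/app");
    assert_eq!(
        interpolate("url = \"${DB_URL}\" # 数据库"),
        "url = \"postgres://localhost/app\" # 数据库"
    );
    assert_eq!(interpolate("${UNSET_VAR_XYZ}"), "${UNSET_VAR_XYZ}");
}
```

With `bytes()` the comment `# 数据库` would be pushed one byte at a time `as char`, producing mojibake; iterating `chars()` avoids that entirely.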

View File

@@ -38,10 +38,25 @@ pub async fn init_db(config: &DatabaseConfig) -> SaasResult<PgPool> {
.connect(&database_url)
.await?;
// Verify the database encoding is UTF8 — on Chinese Windows (GBK / code page 936) the default may be non-UTF8
let encoding: (String,) = sqlx::query_as("SHOW server_encoding")
.fetch_one(&pool)
.await
.unwrap_or(("UNKNOWN".to_string(),));
if encoding.0.to_uppercase() != "UTF8" {
tracing::error!(
"⚠ 数据库编码为 '{}',非 UTF8中文数据将损坏。请使用 CREATE DATABASE ... WITH ENCODING='UTF8' 重建数据库。",
encoding.0
);
} else {
tracing::info!("Database encoding: {}", encoding.0);
}
run_migrations(&pool).await?;
ensure_security_columns(&pool).await?;
seed_admin_account(&pool).await?;
seed_builtin_prompts(&pool).await?;
seed_knowledge_categories(&pool).await?;
seed_builtin_industries(&pool).await?;
seed_demo_data(&pool).await?;
fix_seed_data(&pool).await?;
@@ -727,7 +742,7 @@ async fn seed_demo_data(pool: &PgPool) -> SaasResult<()> {
let id = format!("cfg-{}-{}", cat, key);
sqlx::query(
"INSERT INTO config_items (id, category, key_path, value_type, current_value, default_value, source, description, created_at, updated_at)
-VALUES ($1, $2, $3, $4, $5, $6, 'local', $7, $8, $8) ON CONFLICT (id) DO NOTHING"
+VALUES ($1, $2, $3, $4, $5, $6, 'local', $7, $8, $8) ON CONFLICT (category, key_path) DO NOTHING"
).bind(&id).bind(cat).bind(key).bind(vtype).bind(current).bind(default).bind(desc).bind(&ts)
.execute(pool).await?;
}
@@ -839,6 +854,7 @@ async fn fix_seed_data(pool: &PgPool) -> SaasResult<()> {
let admin_ids: Vec<String> = admins.into_iter().map(|(id,)| id).collect();
// 2. Rename config_items categories (old → new)
// First delete rows in the old category whose (category, key_path) already exists in the target, to avoid unique-constraint conflicts
let category_mappings = [
("server", "general"),
("llm", "model"),
@@ -847,6 +863,13 @@ async fn fix_seed_data(pool: &PgPool) -> SaasResult<()> {
("security", "rate_limit"),
];
for (old_cat, new_cat) in &category_mappings {
// Delete rows in the old category whose key_path conflicts with the target category
sqlx::query(
"DELETE FROM config_items WHERE category = $1 AND key_path IN \
(SELECT key_path FROM config_items WHERE category = $2)"
).bind(old_cat).bind(new_cat)
.execute(pool).await?;
let result = sqlx::query(
"UPDATE config_items SET category = $1, updated_at = $2 WHERE category = $3"
).bind(new_cat).bind(&now).bind(old_cat)
@@ -874,7 +897,7 @@ async fn fix_seed_data(pool: &PgPool) -> SaasResult<()> {
let id = format!("cfg-{}-{}", cat, key);
sqlx::query(
"INSERT INTO config_items (id, category, key_path, value_type, current_value, default_value, source, description, created_at, updated_at)
-VALUES ($1, $2, $3, $4, $5, $6, 'local', $7, $8, $8) ON CONFLICT (id) DO NOTHING"
+VALUES ($1, $2, $3, $4, $5, $6, 'local', $7, $8, $8) ON CONFLICT (category, key_path) DO NOTHING"
).bind(&id).bind(cat).bind(key).bind(vtype).bind(current).bind(default).bind(desc).bind(&now)
.execute(pool).await?;
}
@@ -1004,6 +1027,31 @@ async fn seed_builtin_industries(pool: &PgPool) -> SaasResult<()> {
crate::industry::service::seed_builtin_industries(pool).await
}
/// Seed the default knowledge-base categories (idempotent)
async fn seed_knowledge_categories(pool: &PgPool) -> SaasResult<()> {
let now = chrono::Utc::now();
let categories = [
("seed", "种子知识", "系统内置的行业基础知识"),
("uploaded", "上传文档", "用户上传的文档知识"),
("distillation", "蒸馏知识", "API 蒸馏生成的知识"),
];
for (id, name, desc) in &categories {
sqlx::query(
"INSERT INTO knowledge_categories (id, name, description, created_at, updated_at) \
VALUES ($1, $2, $3, $4, $4) \
ON CONFLICT (id) DO NOTHING"
)
.bind(id)
.bind(name)
.bind(desc)
.bind(&now)
.execute(pool)
.await?;
}
tracing::debug!("Seeded knowledge categories");
Ok(())
}
#[cfg(test)]
mod tests {
// PostgreSQL unit tests require a real database connection; the interface is kept here for compatibility

View File

@@ -15,24 +15,48 @@ pub async fn list_industries(
) -> SaasResult<PaginatedResponse<IndustryListItem>> {
let (page, page_size, offset) = normalize_pagination(query.page, query.page_size);
// Dynamically build a parameterized query — all user input is bound via $N placeholders
-let mut where_parts: Vec<String> = vec!["1=1".to_string()];
-let mut param_idx = 3; // $1=LIMIT, $2=OFFSET, $3+=filters
let status_param: Option<String> = query.status.clone();
let source_param: Option<String> = query.source.clone();
+// Build WHERE conditions — each query numbers its own parameters
+let mut where_parts: Vec<String> = vec!["1=1".to_string()];
+// count query: parameters start at $1
+let mut count_params: Vec<String> = Vec::new();
+let mut count_idx = 1;
if status_param.is_some() {
-where_parts.push(format!("status = ${}", param_idx));
-param_idx += 1;
+count_params.push(format!("status = ${}", count_idx));
+count_idx += 1;
}
if source_param.is_some() {
-where_parts.push(format!("source = ${}", param_idx));
-param_idx += 1;
+count_params.push(format!("source = ${}", count_idx));
+count_idx += 1;
}
-let where_sql = where_parts.join(" AND ");
+let count_where = if count_params.is_empty() {
+"1=1".to_string()
+} else {
+format!("1=1 AND {}", count_params.join(" AND "))
+};
// items query: $1=LIMIT, $2=OFFSET, $3+=filters
let mut items_params: Vec<String> = Vec::new();
let mut items_idx = 3;
if status_param.is_some() {
items_params.push(format!("status = ${}", items_idx));
items_idx += 1;
}
if source_param.is_some() {
items_params.push(format!("source = ${}", items_idx));
items_idx += 1;
}
let items_where = if items_params.is_empty() {
"1=1".to_string()
} else {
format!("1=1 AND {}", items_params.join(" AND "))
};
// count query
-let count_sql = format!("SELECT COUNT(*) FROM industries WHERE {}", where_sql);
+let count_sql = format!("SELECT COUNT(*) FROM industries WHERE {}", count_where);
let mut count_q = sqlx::query_scalar::<_, i64>(&count_sql);
if let Some(ref s) = status_param { count_q = count_q.bind(s); }
if let Some(ref s) = source_param { count_q = count_q.bind(s); }
@@ -44,7 +68,7 @@ pub async fn list_industries(
COALESCE(jsonb_array_length(keywords), 0) as keywords_count, \
created_at, updated_at \
FROM industries WHERE {} ORDER BY source, id LIMIT $1 OFFSET $2",
-where_sql
+items_where
);
let mut items_q = sqlx::query_as::<_, IndustryListItem>(&items_sql)
.bind(page_size as i64)

View File

@@ -29,7 +29,7 @@ pub struct IndustryListItem {
pub description: String,
pub status: String,
pub source: String,
-pub keywords_count: i64,
+pub keywords_count: i32,
pub created_at: chrono::DateTime<chrono::Utc>,
pub updated_at: chrono::DateTime<chrono::Utc>,
}

View File

@@ -804,7 +804,7 @@ async fn handle_document_upload(
// 创建知识条目
let item_req = CreateItemRequest {
-category_id: "uploaded".to_string(), // TODO: take from upload params
+category_id: "uploaded".to_string(), // FUTURE: post-release category_id from upload params
title: doc.title.clone(),
content,
keywords: None,

View File

@@ -632,6 +632,8 @@ pub async fn unified_search(
// === Seed-knowledge cold start ===
/// Insert seed knowledge for the given industry (idempotent)
///
/// P1-6 fix: also create knowledge_chunks so the content is searchable
pub async fn seed_knowledge(
pool: &PgPool,
industry_id: &str,
@@ -684,6 +686,24 @@ pub async fn seed_knowledge(
.execute(pool)
.await?;
// Create chunks so the content is searchable (consistent with the distill_knowledge worker)
let chunks = chunk_content(content, 500, 50);
for (chunk_idx, chunk_text) in chunks.iter().enumerate() {
let chunk_id = uuid::Uuid::new_v4().to_string();
sqlx::query(
"INSERT INTO knowledge_chunks (id, item_id, content, keywords, chunk_index, created_at) \
VALUES ($1, $2, $3, $4, $5, $6)"
)
.bind(&chunk_id)
.bind(&id)
.bind(chunk_text)
.bind(&kw_json)
.bind(chunk_idx as i32)
.bind(&now)
.execute(pool)
.await?;
}
created += 1;
}
Ok(created)
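`chunk_content` itself is not shown in this diff. A plausible fixed-window implementation consistent with the `chunk_content(content, 500, 50)` call above — 500-char chunks with 50 chars of overlap — might look like this (a hypothetical sketch, not the real function):

```rust
/// Hypothetical sketch of chunk_content(content, size, overlap): fixed-size
/// windows over chars, carrying `overlap` chars of context between chunks.
/// The actual implementation in the codebase is not shown in the diff.
fn chunk_content(content: &str, size: usize, overlap: usize) -> Vec<String> {
    assert!(overlap < size, "overlap must be smaller than the chunk size");
    let chars: Vec<char> = content.chars().collect(); // char-based: safe for Chinese text
    let mut chunks = Vec::new();
    let mut start = 0;
    while start < chars.len() {
        let end = (start + size).min(chars.len());
        chunks.push(chars[start..end].iter().collect());
        if end == chars.len() {
            break;
        }
        start = end - overlap; // step back to create the overlap window
    }
    chunks
}

fn main() {
    let text = "a".repeat(1200);
    let chunks = chunk_content(&text, 500, 50);
    // windows: [0,500) [450,950) [900,1200) → 3 chunks
    assert_eq!(chunks.len(), 3);
    assert_eq!(chunks[0].chars().count(), 500);
    assert_eq!(chunks[2].chars().count(), 300);
}
```

The overlap keeps a sentence that straddles a chunk boundary retrievable from at least one chunk, which matters for the keyword search the chunks feed.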

View File

@@ -99,6 +99,8 @@ async fn main() -> anyhow::Result<()> {
if let Err(e) = zclaw_saas::crypto::migrate_legacy_totp_secrets(&db, &enc_key).await {
tracing::warn!("TOTP legacy migration check failed: {}", e);
}
// Self-heal: re-encrypt provider keys with current key
zclaw_saas::relay::key_pool::heal_provider_keys(&db, &enc_key).await;
} else {
drop(config_for_migration);
}
@@ -307,10 +309,34 @@ async fn build_router(state: AppState) -> axum::Router {
.unwrap_or(false);
if config.server.cors_origins.is_empty() {
if is_dev {
// Dev mode: use explicit localhost origins (Any + credentials violates CORS spec)
let dev_origins: Vec<HeaderValue> = [
"http://localhost:1420",
"http://localhost:5173",
"http://127.0.0.1:1420",
"http://127.0.0.1:5173",
"http://localhost:8080",
"http://127.0.0.1:8080",
"tauri://localhost",
"https://tauri.localhost",
].iter()
.filter_map(|o| o.parse::<HeaderValue>().ok())
.collect();
CorsLayer::new()
-.allow_origin(Any)
-.allow_methods(Any)
-.allow_headers(Any)
+.allow_origin(dev_origins)
+.allow_methods([
+axum::http::Method::GET,
+axum::http::Method::POST,
+axum::http::Method::PUT,
+axum::http::Method::PATCH,
+axum::http::Method::DELETE,
+axum::http::Method::OPTIONS,
+])
+.allow_headers([
+axum::http::header::AUTHORIZATION,
+axum::http::header::CONTENT_TYPE,
+axum::http::header::COOKIE,
+])
.allow_credentials(true)
} else {
tracing::error!("生产环境必须配置 server.cors_origins不能使用 allow_origin(Any)");
@@ -350,6 +376,10 @@ async fn build_router(state: AppState) -> axum::Router {
let protected_routes = zclaw_saas::auth::protected_routes()
.merge(zclaw_saas::account::routes())
.merge(
zclaw_saas::account::admin_routes()
.layer(middleware::from_fn(zclaw_saas::auth::admin_guard_middleware))
)
.merge(zclaw_saas::model_config::routes())
// relay::routes() is not merged here — SSE endpoints need longer timeouts and are merged separately on the final Router
.merge(zclaw_saas::migration::routes())
@@ -359,6 +389,10 @@ async fn build_router(state: AppState) -> axum::Router {
.merge(zclaw_saas::scheduled_task::routes())
.merge(zclaw_saas::telemetry::routes())
.merge(zclaw_saas::billing::routes())
.merge(
zclaw_saas::billing::admin_routes()
.layer(middleware::from_fn(zclaw_saas::auth::admin_guard_middleware))
)
.merge(zclaw_saas::knowledge::routes())
.merge(zclaw_saas::industry::routes())
.layer(middleware::from_fn_with_state(

View File

@@ -119,13 +119,13 @@ pub async fn quota_check_middleware(
}
// Get the auth context from request extensions
-let account_id = match req.extensions().get::<AuthContext>() {
-Some(ctx) => ctx.account_id.clone(),
+let (account_id, role) = match req.extensions().get::<AuthContext>() {
+Some(ctx) => (ctx.account_id.clone(), ctx.role.clone()),
None => return next.run(req).await,
};
// Check the relay_requests quota
-match crate::billing::service::check_quota(&state.db, &account_id, "relay_requests").await {
+match crate::billing::service::check_quota(&state.db, &account_id, &role, "relay_requests").await {
Ok(check) if !check.allowed => {
tracing::warn!(
"Quota exceeded for account {}: {} ({}/{})",
@@ -145,6 +145,26 @@ pub async fn quota_check_middleware(
_ => {}
}
// P1-8 fix: also check the input_tokens quota
match crate::billing::service::check_quota(&state.db, &account_id, &role, "input_tokens").await {
Ok(check) if !check.allowed => {
tracing::warn!(
"Token quota exceeded for account {}: {} ({}/{})",
account_id,
check.reason.as_deref().unwrap_or("Token配额已用尽"),
check.current,
check.limit.map(|l| l.to_string()).unwrap_or_else(|| "".into()),
);
return SaasError::RateLimited(
check.reason.unwrap_or_else(|| "月度 Token 配额已用尽".into()),
).into_response();
}
Err(e) => {
tracing::warn!("Token quota check failed for account {}: {}", account_id, e);
}
_ => {}
}
next.run(req).await
}

View File

@@ -258,7 +258,8 @@ pub async fn seed_default_config_items(db: &PgPool) -> SaasResult<usize> {
let id = uuid::Uuid::new_v4().to_string();
sqlx::query(
"INSERT INTO config_items (id, category, key_path, value_type, current_value, default_value, source, description, requires_restart, created_at, updated_at)
VALUES ($1, $2, $3, $4, $5, $6, 'local', $7, false, $8, $8)"
VALUES ($1, $2, $3, $4, $5, $6, 'local', $7, false, $8, $8)
ON CONFLICT (category, key_path) DO NOTHING"
)
.bind(&id).bind(category).bind(key_path).bind(value_type)
.bind(current_value).bind(default_value).bind(description).bind(&now)
@@ -374,7 +375,8 @@ pub async fn sync_config(
let category = parts.first().unwrap_or(&"general").to_string();
sqlx::query(
"INSERT INTO config_items (id, category, key_path, value_type, current_value, default_value, source, description, requires_restart, created_at, updated_at)
VALUES ($1, $2, $3, 'string', $4, $4, 'local', '客户端推送', false, $5, $5)"
VALUES ($1, $2, $3, 'string', $4, $4, 'local', '客户端推送', false, $5, $5)
ON CONFLICT (category, key_path) DO NOTHING"
)
.bind(&id).bind(&category).bind(key).bind(val).bind(&now)
.execute(db).await?;

View File

@@ -419,21 +419,33 @@ pub async fn revoke_account_api_key(
pub async fn get_usage_stats(
db: &PgPool, account_id: &str, query: &UsageQuery,
) -> SaasResult<UsageStats> {
-// Optional date filters: pass as TEXT with explicit $N::timestamptz SQL cast.
-// This avoids the sqlx NULL-without-type-OID problem — PG's ::timestamptz
-// gives a typed NULL even when sqlx sends an untyped NULL.
// === Totals: from billing_usage_quotas (authoritative source) ===
// billing_usage_quotas is written to on every relay request (both JSON and SSE),
// whereas usage_records has 0 tokens for SSE requests. Use billing as the primary source.
let billing_row = sqlx::query(
"SELECT COALESCE(SUM(input_tokens), 0)::bigint,
COALESCE(SUM(output_tokens), 0)::bigint,
COALESCE(SUM(relay_requests), 0)::bigint
FROM billing_usage_quotas WHERE account_id = $1"
)
.bind(account_id)
.fetch_one(db)
.await?;
let total_input: i64 = billing_row.try_get(0).unwrap_or(0);
let total_output: i64 = billing_row.try_get(1).unwrap_or(0);
let total_requests: i64 = billing_row.try_get(2).unwrap_or(0);
// === Breakdowns: from usage_records (per-request detail) ===
// Optional date filters: pass as TEXT with explicit SQL cast.
let from_str: Option<&str> = query.from.as_deref();
// For 'to' date-only strings, append T23:59:59 to include the entire day
let to_str: Option<String> = query.to.as_ref().map(|s| {
if s.len() == 10 { format!("{}T23:59:59", s) } else { s.clone() }
});
-// Build SQL dynamically to avoid sqlx NULL-without-type-OID problem entirely.
-// Date parameters are injected as SQL literals (validated above via chrono parse).
-// Only account_id uses parameterized binding to prevent SQL injection on user input.
+// Build SQL dynamically for usage_records breakdowns.
+// Date parameters are injected as SQL literals (validated via chrono parse).
let mut where_parts = vec![format!("account_id = '{}'", account_id.replace('\'', "''"))];
if let Some(f) = from_str {
// Validate: must be parseable as a date
let valid = chrono::NaiveDate::parse_from_str(f, "%Y-%m-%d").is_ok()
|| chrono::NaiveDateTime::parse_from_str(f, "%Y-%m-%dT%H:%M:%S%.f").is_ok();
if !valid {
@@ -457,15 +469,6 @@ pub async fn get_usage_stats(
}
let where_clause = where_parts.join(" AND ");
-let total_sql = format!(
-"SELECT COUNT(*)::bigint, COALESCE(SUM(input_tokens), 0)::bigint, COALESCE(SUM(output_tokens), 0)::bigint
-FROM usage_records WHERE {}", where_clause
-);
-let row = sqlx::query(&total_sql).fetch_one(db).await?;
-let total_requests: i64 = row.try_get(0).unwrap_or(0);
-let total_input: i64 = row.try_get(1).unwrap_or(0);
-let total_output: i64 = row.try_get(2).unwrap_or(0);
// Per-model statistics
let by_model_sql = format!(
"SELECT provider_id, model_id, COUNT(*)::bigint AS request_count, COALESCE(SUM(input_tokens), 0)::bigint AS input_tokens, COALESCE(SUM(output_tokens), 0)::bigint AS output_tokens

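The dynamic WHERE construction above rests on two invariants: the user-controlled `account_id` is escaped by doubling single quotes before being inlined, and a date-only `to` filter is expanded to `T23:59:59` so the whole day is included. A minimal dependency-free sketch of those two rules (helper names hypothetical; the real code additionally validates dates via chrono before inlining them):

```rust
// Hypothetical helpers mirroring the escaping and date-expansion rules above.

/// Escape a string for use as a SQL literal by doubling single quotes.
fn sql_quote(s: &str) -> String {
    format!("'{}'", s.replace('\'', "''"))
}

/// Expand a date-only filter (YYYY-MM-DD, 10 chars) to cover the entire day.
fn expand_to_date(s: &str) -> String {
    if s.len() == 10 { format!("{}T23:59:59", s) } else { s.to_string() }
}

fn main() {
    let account_id = "acct'; DROP TABLE usage_records; --";
    let where_parts = vec![
        format!("account_id = {}", sql_quote(account_id)),
        format!("created_at <= '{}'", expand_to_date("2026-04-19")),
    ];
    // The embedded quote is doubled, so the injection attempt stays inside the literal.
    println!("{}", where_parts.join(" AND "));
}
```

Escaping is only a fallback here; keeping `account_id` as a parameterized bind, as the surrounding code does for the totals query, remains the safer default.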
@@ -68,7 +68,7 @@ pub async fn get_prompt(
Ok(Json(service::get_template_by_name(&state.db, &name).await?))
}
-/// PUT /api/v1/prompts/{name} — update template metadata
+/// PUT /api/v1/prompts/{name} — update template metadata + optionally auto-create a new version
pub async fn update_prompt(
State(state): State<AppState>,
Extension(ctx): Extension<AuthContext>,
@@ -82,6 +82,11 @@ pub async fn update_prompt(
&state.db, &tmpl.id,
req.description.as_deref(),
req.status.as_deref(),
req.system_prompt.as_deref(),
req.user_prompt_template.as_deref(),
req.variables.clone(),
req.changelog.as_deref(),
req.min_app_version.as_deref(),
).await?;
log_operation(&state.db, &ctx.account_id, "prompt.update", "prompt", &tmpl.id,
@@ -99,7 +104,7 @@ pub async fn archive_prompt(
check_permission(&ctx, "prompt:admin")?;
let tmpl = service::get_template_by_name(&state.db, &name).await?;
-let result = service::update_template(&state.db, &tmpl.id, None, Some("archived")).await?;
+let result = service::update_template(&state.db, &tmpl.id, None, Some("archived"), None, None, None, None, None).await?;
log_operation(&state.db, &ctx.account_id, "prompt.archive", "prompt", &tmpl.id, None, ctx.client_ip.as_deref()).await?;

@@ -108,12 +108,20 @@ pub async fn list_templates(
Ok(PaginatedResponse { items, total, page, page_size })
}
-/// Update template metadata (does not modify content)
+/// Update template metadata + optionally auto-create a new version
+///
+/// When `system_prompt` is provided, a new version is created automatically and `current_version` is incremented.
+/// Updating only `description`/`status` does not bump the version number.
pub async fn update_template(
db: &PgPool,
id: &str,
description: Option<&str>,
status: Option<&str>,
system_prompt: Option<&str>,
user_prompt_template: Option<&str>,
variables: Option<serde_json::Value>,
changelog: Option<&str>,
min_app_version: Option<&str>,
) -> SaasResult<PromptTemplateInfo> {
let now = chrono::Utc::now();
@@ -130,6 +138,11 @@ pub async fn update_template(
.bind(st).bind(&now).bind(id).execute(db).await?;
}
// Auto-create version when content is provided
if let Some(sp) = system_prompt {
create_version(db, id, sp, user_prompt_template, variables, changelog, min_app_version).await?;
}
get_template(db, id).await
}
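The rule the widened signature encodes — metadata-only updates leave the version untouched, while any content update mints a new version — can be sketched in isolation (struct and field names hypothetical; the real service persists via sqlx and calls `create_version`):

```rust
// Hypothetical in-memory model of the version-bump rule described above.
struct Template {
    description: String,
    current_version: u32,
}

/// Metadata-only updates never bump the version; providing content does.
fn update_template(t: &mut Template, description: Option<&str>, system_prompt: Option<&str>) {
    if let Some(d) = description {
        t.description = d.to_string();
    }
    if system_prompt.is_some() {
        // In the real service this is where create_version(...) runs.
        t.current_version += 1;
    }
}

fn main() {
    let mut t = Template { description: "v1".into(), current_version: 1 };
    update_template(&mut t, Some("new description"), None);
    assert_eq!(t.current_version, 1); // metadata only: no bump
    update_template(&mut t, None, Some("You are a helpful assistant."));
    assert_eq!(t.current_version, 2); // content provided: new version
}
```

Keeping the branch keyed on `Option` means callers that only touch `description`/`status` (such as `archive_prompt` above) simply pass `None` everywhere else and never trigger a version bump.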

Some files were not shown because too many files have changed in this diff.