fix(growth): HIGH-6 fix hollow combined extraction in extract_combined
Some checks failed
CI / Lint & TypeCheck (push) Has been cancelled
CI / Unit Tests (push) Has been cancelled
CI / Build Frontend (push) Has been cancelled
CI / Rust Check (push) Has been cancelled
CI / Security Scan (push) Has been cancelled
CI / E2E Tests (push) Has been cancelled
Root cause: growth.rs hard-coded `experiences: Vec::new()` and `profile_signals: default()` when constructing CombinedExtraction, so L1 structured experiences were never extracted, L2 skill evolution had no input data, and the whole evolution engine could not work end to end.

Fix:
- extractor.rs: add COMBINED_EXTRACTION_PROMPT, a unified prompt so a single LLM call emits memories + experiences + profile_signals together
- extractor.rs: add parse_combined_response() to parse the LLM's JSON response
- LlmDriverForExtraction trait: add extract_with_prompt() (unsupported by default; callers fall back to the existing extract() plus heuristic inference)
- MemoryExtractor: add extract_combined(), which prefers the single combined call and falls back on failure
- growth.rs: extract_combined() now uses the new combined extraction instead of the hard-coded empty values
- TauriExtractionDriver: implement extract_with_prompt()
- ProfileSignals: add has_any_signal() method
- types.rs: no structural change to ProfileSignals (fields already exist)

Tests: 4 new tests (parse_combined_response_full/minimal/invalid + extract_combined_fallback); all 11 extractor tests pass
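The "prefer one call, degrade to the legacy path" shape described above can be sketched as follows. This is a std-only illustration: the trait and function names mirror the commit, but the signatures, the `LegacyDriver` type, and the prompt strings are simplified assumptions, not the crate's real API.

```rust
// Illustrative trait mirroring LlmDriverForExtraction after this commit.
trait LlmDriverForExtraction {
    fn extract(&self, messages: &[String]) -> Result<Vec<String>, String>;

    // New method: unsupported by default, so existing drivers keep
    // compiling and callers fall back gracefully.
    fn extract_with_prompt(
        &self,
        _messages: &[String],
        _system_prompt: &str,
        _user_prompt: &str,
    ) -> Result<String, String> {
        Err("extract_with_prompt not supported by this driver".into())
    }
}

// A hypothetical legacy driver that only implements the old extract() path.
struct LegacyDriver;

impl LlmDriverForExtraction for LegacyDriver {
    fn extract(&self, messages: &[String]) -> Result<Vec<String>, String> {
        Ok(messages.to_vec())
    }
}

// Combined extraction: prefer one LLM call that returns memories,
// experiences, and profile signals together; fall back to extract()
// plus heuristics (elided here) when that call is unavailable or fails.
fn extract_combined(driver: &dyn LlmDriverForExtraction, messages: &[String]) -> Vec<String> {
    match driver.extract_with_prompt(messages, "<system prompt>", "<user prompt>") {
        // Real code would run parse_combined_response() on `raw` here.
        Ok(raw) => vec![raw],
        Err(_) => driver.extract(messages).unwrap_or_default(),
    }
}

fn main() {
    let msgs = vec!["user: hi".to_string(), "assistant: hello".to_string()];
    // LegacyDriver lacks extract_with_prompt, so this exercises the fallback.
    let out = extract_combined(&LegacyDriver, &msgs);
    println!("extracted {} items via fallback", out.len());
}
```

The default trait method is what makes the change backward compatible: drivers opt in to the single-call path, and everything else silently keeps the old behavior.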
@@ -225,6 +225,69 @@ impl LlmDriverForExtraction for TauriExtractionDriver {
         Ok(memories)
     }
 
+    async fn extract_with_prompt(
+        &self,
+        messages: &[Message],
+        system_prompt: &str,
+        user_prompt: &str,
+    ) -> Result<String> {
+        if messages.len() < 2 {
+            return Err(zclaw_types::Error::msg(
+                "Too few messages for combined extraction",
+            ));
+        }
+
+        tracing::debug!(
+            "[TauriExtractionDriver] Combined extraction from {} messages",
+            messages.len()
+        );
+
+        let request = CompletionRequest {
+            model: self.model.clone(),
+            system: Some(system_prompt.to_string()),
+            messages: vec![Message::user(user_prompt.to_string())],
+            tools: Vec::new(),
+            max_tokens: Some(3000),
+            temperature: Some(0.3),
+            stop: Vec::new(),
+            stream: false,
+            thinking_enabled: false,
+            reasoning_effort: None,
+            plan_mode: false,
+        };
+
+        let response = self.driver.complete(request).await.map_err(|e| {
+            tracing::error!(
+                "[TauriExtractionDriver] Combined extraction LLM call failed: {}",
+                e
+            );
+            e
+        })?;
+
+        let response_text: String = response
+            .content
+            .into_iter()
+            .filter_map(|block| match block {
+                ContentBlock::Text { text } => Some(text),
+                _ => None,
+            })
+            .collect::<Vec<_>>()
+            .join("");
+
+        if response_text.is_empty() {
+            return Err(zclaw_types::Error::msg(
+                "Empty response from LLM for combined extraction",
+            ));
+        }
+
+        tracing::info!(
+            "[TauriExtractionDriver] Combined extraction response: {} chars",
+            response_text.len()
+        );
+
+        Ok(response_text)
+    }
 }
 
 /// Global extraction driver instance (legacy path, kept for compatibility).