refactor: unify project name from OpenFang to ZCLAW

Rename the project from OpenFang to ZCLAW throughout the code and documentation, covering:
- project names in configuration files
- code comments and documentation references
- environment variables and paths
- type definitions and interface names
- test cases and mock data

Also streamline parts of the code structure, remove unused modules, and update the related dependencies.
This commit is contained in:
iven
2026-03-27 07:36:03 +08:00
parent 4b08804aa9
commit 0d4fa96b82
226 changed files with 7288 additions and 5788 deletions
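The rename itself is mechanical. A minimal sketch of the approach in a scratch directory, assuming GNU sed (the real commit touched 226 files across config, docs, and code; file names below are illustrative only):

```shell
# Hypothetical demo of a project-wide rename; runs in /tmp, not the real repo.
mkdir -p /tmp/zclaw_rename_demo
printf 'project: OpenFang\n' > /tmp/zclaw_rename_demo/config.yml
printf '// OpenFang kernel entry\n' > /tmp/zclaw_rename_demo/main.rs

# Find every file mentioning the old name, then rewrite it in place
# (GNU sed syntax; BSD/macOS sed needs `sed -i ''`).
grep -rl 'OpenFang' /tmp/zclaw_rename_demo | xargs sed -i 's/OpenFang/ZCLAW/g'

cat /tmp/zclaw_rename_demo/config.yml   # project: ZCLAW
```

In a real repo you would exclude `.git` and generated artifacts from the `grep` before piping into `sed`.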


@@ -0,0 +1,153 @@
//! Intelligence Hooks - Pre/Post conversation integration
//!
//! Bridges the intelligence layer modules (identity, memory, heartbeat, reflection)
//! into the kernel's chat flow at the Tauri command boundary.
//!
//! Architecture: kernel_commands.rs → intelligence_hooks → intelligence modules → Viking/Kernel
use tracing::debug;

use crate::intelligence::identity::IdentityManagerState;
use crate::intelligence::heartbeat::HeartbeatEngineState;
use crate::intelligence::reflection::ReflectionEngineState;

/// Run pre-conversation intelligence hooks
///
/// 1. Build memory context from VikingStorage (FTS5 + TF-IDF + Embedding)
/// 2. Build identity-enhanced system prompt (SOUL.md + instructions)
///
/// Returns the enhanced system prompt that should be passed to the kernel.
pub async fn pre_conversation_hook(
    agent_id: &str,
    user_message: &str,
    identity_state: &IdentityManagerState,
) -> Result<String, String> {
    // Step 1: Build memory context from Viking storage
    let memory_context = build_memory_context(agent_id, user_message)
        .await
        .unwrap_or_default();

    // Step 2: Build identity-enhanced system prompt
    let enhanced_prompt = build_identity_prompt(agent_id, &memory_context, identity_state)
        .await
        .unwrap_or_default();

    Ok(enhanced_prompt)
}

/// Run post-conversation intelligence hooks
///
/// 1. Record interaction for heartbeat engine
/// 2. Record conversation for reflection engine, trigger reflection if needed
pub async fn post_conversation_hook(
    agent_id: &str,
    _heartbeat_state: &HeartbeatEngineState,
    reflection_state: &ReflectionEngineState,
) {
    // Step 1: Record interaction for heartbeat
    crate::intelligence::heartbeat::record_interaction(agent_id);
    debug!("[intelligence_hooks] Recorded interaction for agent: {}", agent_id);

    // Step 2: Record conversation for reflection
    // tokio::sync::Mutex::lock() returns the guard directly
    // (no poisoning, unlike std::sync::Mutex)
    let mut engine = reflection_state.lock().await;
    engine.record_conversation();
    debug!(
        "[intelligence_hooks] Conversation count updated for agent: {}",
        agent_id
    );

    if engine.should_reflect() {
        debug!(
            "[intelligence_hooks] Reflection threshold reached for agent: {}",
            agent_id
        );
        let reflection_result = engine.reflect(agent_id, &[]);
        debug!(
            "[intelligence_hooks] Reflection completed: {} patterns, {} suggestions",
            reflection_result.patterns.len(),
            reflection_result.improvements.len()
        );
    }
}

/// Build memory context by searching VikingStorage for relevant memories
async fn build_memory_context(
    agent_id: &str,
    user_message: &str,
) -> Result<String, String> {
    // Try Viking storage (has FTS5 + TF-IDF + Embedding)
    let storage = crate::viking_commands::get_storage().await?;

    // FindOptions from zclaw_growth
    let options = zclaw_growth::FindOptions {
        scope: Some(format!("agent://{}", agent_id)),
        limit: Some(8),
        min_similarity: Some(0.2),
    };

    // find is on the VikingStorage trait — call via trait to dispatch correctly
    let results: Vec<zclaw_growth::MemoryEntry> =
        zclaw_growth::VikingStorage::find(storage.as_ref(), user_message, options)
            .await
            .map_err(|e| format!("Memory search failed: {}", e))?;

    if results.is_empty() {
        return Ok(String::new());
    }

    // Format memories into a context string ("相关记忆" = "Related Memories")
    let mut context = String::from("## 相关记忆\n\n");
    let mut token_estimate: usize = 0;
    let max_tokens: usize = 500;

    for entry in &results {
        // Prefer overview (L1 summary) over full content
        // overview is Option<String> — use as_deref to get Option<&str>
        let overview_str = entry.overview.as_deref().unwrap_or("");
        let text = if !overview_str.is_empty() {
            overview_str
        } else {
            &entry.content
        };

        // Truncate long entries on char boundaries: byte slicing like
        // &text[..100] would panic mid-way through a multi-byte CJK char
        let truncated = if text.chars().count() > 100 {
            let head: String = text.chars().take(100).collect();
            format!("{}...", head)
        } else {
            text.to_string()
        };

        // Rough token estimate: 1 per ASCII char, 2 per other (e.g. CJK) char
        let tokens: usize = truncated
            .chars()
            .map(|c: char| if c.is_ascii() { 1 } else { 2 })
            .sum();
        if token_estimate + tokens > max_tokens {
            break;
        }

        context.push_str(&format!("- [{}] {}\n", entry.memory_type, truncated));
        token_estimate += tokens;
    }

    Ok(context)
}

/// Build identity-enhanced system prompt
async fn build_identity_prompt(
    agent_id: &str,
    memory_context: &str,
    identity_state: &IdentityManagerState,
) -> Result<String, String> {
    // IdentityManagerState is Arc<tokio::sync::Mutex<AgentIdentityManager>>
    // tokio::sync::Mutex::lock() returns the guard directly
    let mut manager = identity_state.lock().await;
    let prompt = manager.build_system_prompt(
        agent_id,
        if memory_context.is_empty() { None } else { Some(memory_context) },
    );
    Ok(prompt)
}