feat(chat): dynamic LLM conversation suggestions, replacing hardcoded keyword matching

After an AI reply finishes, the most recent conversation is sent to the LLM to generate
3 context-relevant follow-up questions, replacing the old generic defaults such as
"继续深入分析" ("continue the in-depth analysis").

Changes:
- llm-service.ts: add a suggestions prompt template plus an llmSuggest() helper
- streamStore.ts: send the SSE request via the SaaS relay; read the response in
  one pass with response.text() to avoid a Tauri WebView2 ReadableStream
  compatibility issue; fall back to keyword suggestions on failure
- chatStore.ts: mirror the suggestionsLoading state
- SuggestionChips.tsx: loading skeleton animation
- ChatArea.tsx: pass the loading prop
Author: iven
Date: 2026-04-23 11:41:50 +08:00
parent 3e78dacef3
commit b56d1a4c34
5 changed files with 113 additions and 34 deletions
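The flow described above first needs a compact context string built from the tail of the conversation before it is handed to llmSuggest(). A minimal sketch of how streamStore might assemble it (the ChatMessage shape, the formatContext name, and the 6-message / 500-character limits are illustrative assumptions, not code from this commit):

```typescript
interface ChatMessage {
  role: 'user' | 'assistant';
  content: string;
}

// Format the tail of the conversation as "role: content" blocks,
// truncating long messages so the suggestions prompt stays small.
function formatContext(
  messages: ChatMessage[],
  maxMessages = 6,
  maxChars = 500,
): string {
  return messages
    .slice(-maxMessages)
    .map((m) => {
      const text =
        m.content.length > maxChars ? m.content.slice(0, maxChars) + '…' : m.content;
      return `${m.role}: ${text}`;
    })
    .join('\n\n');
}
```

The result is what gets interpolated into the `${context}` slot of the user prompt template below.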


@@ -644,6 +644,21 @@ const HARDCODED_PROMPTS: Record<string, { system: string; user: (arg: string) =>
]`,
    user: (conversation: string) => `从以下对话中提取值得长期记住的信息:\n\n${conversation}\n\n如果没有值得记忆的内容返回空数组 []。`,
  },
  suggestions: {
    system: `你是对话分析助手。根据最近的对话内容,生成 3 个用户可能想继续探讨的问题。
要求:
- 每个问题必须与对话内容直接相关,具体且有针对性
- 帮助用户深入理解、实际操作或拓展思路
- 每个问题不超过 30 个中文字符
- 不要重复对话中已讨论过的内容
- 使用与用户相同的语言
只输出 JSON 数组,包含恰好 3 个字符串。不要输出任何其他内容。
示例:["如何在生产环境中部署?", "这个方案的成本如何?", "有没有更简单的替代方案?"]`,
    user: (context: string) => `以下是对话中最近的消息:\n\n${context}\n\n请生成 3 个后续问题。`,
  },
};
// === Prompt Cache (SaaS OTA) ===
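Although the system prompt asks for a bare JSON array, the caller still has to defend against fenced or otherwise malformed model output before trusting it. One hedged sketch of that parsing step (parseSuggestions is a hypothetical helper, not part of this diff):

```typescript
// Extract exactly 3 suggestion strings from raw model output.
// Returns null when the output cannot be parsed or has the wrong
// shape, so the caller can fall back to the keyword-based defaults.
function parseSuggestions(raw: string): string[] | null {
  // Strip optional ```json … ``` fences some models emit anyway.
  const cleaned = raw
    .trim()
    .replace(/^```(?:json)?\s*/i, '')
    .replace(/```\s*$/, '');
  try {
    const parsed = JSON.parse(cleaned);
    if (
      Array.isArray(parsed) &&
      parsed.length === 3 &&
      parsed.every((s) => typeof s === 'string' && s.length > 0)
    ) {
      return parsed;
    }
  } catch {
    // fall through to the null return below
  }
  return null;
}
```

Returning null rather than throwing keeps the degradation path (keyword fallback) a plain conditional at the call site.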
@@ -806,6 +821,7 @@ export const LLM_PROMPTS = {
  get reflection() { return { system: getSystemPrompt('reflection'), user: getUserPromptTemplate('reflection')! }; },
  get compaction() { return { system: getSystemPrompt('compaction'), user: getUserPromptTemplate('compaction')! }; },
  get extraction() { return { system: getSystemPrompt('extraction'), user: getUserPromptTemplate('extraction')! }; },
  get suggestions() { return { system: getSystemPrompt('suggestions'), user: getUserPromptTemplate('suggestions')! }; },
};
// === Telemetry Integration ===
@@ -876,3 +892,18 @@ export async function llmExtract(
  trackLLMCall(llm, response);
  return response.content;
}

export async function llmSuggest(
  conversationContext: string,
  adapter?: LLMServiceAdapter,
): Promise<string> {
  const llm = adapter || getLLMAdapter();
  const response = await llm.complete([
    { role: 'system', content: LLM_PROMPTS.suggestions.system },
    {
      role: 'user',
      content: typeof LLM_PROMPTS.suggestions.user === 'function'
        ? LLM_PROMPTS.suggestions.user(conversationContext)
        : LLM_PROMPTS.suggestions.user,
    },
  ]);
  trackLLMCall(llm, response);
  return response.content;
}
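On the calling side, the commit message says streamStore degrades to keyword suggestions when the relay call fails. A rough sketch of that wiring (getSuggestions, SuggestFn, and the KEYWORD_DEFAULTS list are illustrative assumptions; the real fallback lives in streamStore.ts):

```typescript
type SuggestFn = (context: string) => Promise<string>;

// Hypothetical generic defaults used when the LLM path fails.
const KEYWORD_DEFAULTS = ['继续深入分析', '换个角度解释', '给出实际示例'];

// Try the LLM first; on any failure (network/relay error, malformed
// JSON, wrong shape) degrade silently to the keyword-based defaults.
async function getSuggestions(context: string, suggest: SuggestFn): Promise<string[]> {
  try {
    const raw = await suggest(context);
    const parsed = JSON.parse(raw.trim());
    if (
      Array.isArray(parsed) &&
      parsed.length === 3 &&
      parsed.every((s) => typeof s === 'string')
    ) {
      return parsed;
    }
  } catch {
    // degrade silently; the UI shows the defaults instead
  }
  return KEYWORD_DEFAULTS;
}
```

Injecting the suggest function rather than importing llmSuggest directly also makes the fallback path trivially testable with a stub adapter.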