feat(l4): upgrade engines with LLM-powered capabilities (Phase 2)

Phase 2 LLM Engine Upgrades:
- ReflectionEngine: Add LLM semantic analysis for pattern detection
- ContextCompactor: Add LLM summarization for high-quality compaction
- MemoryExtractor: Add LLM importance scoring for memory extraction
- Add unified LLM service adapter (OpenAI, Volcengine, Gateway, Mock)
- Add MemorySource 'llm-reflection' for LLM-generated memories
- Add 13 integration tests for LLM-powered features
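The unified adapter mentioned above could look roughly like this; the interface, class, and function names below are assumptions based only on the commit summary, not the actual code:

```typescript
// Hypothetical sketch of a unified LLM service adapter. Names here
// (LLMService, MockLLMService, createLLMService) are illustrative,
// not taken from the repository.
type LLMProvider = 'openai' | 'volcengine' | 'gateway' | 'mock';

interface LLMService {
  complete(prompt: string): Promise<string>;
}

// A mock provider (as listed in the commit) keeps engine tests offline.
class MockLLMService implements LLMService {
  async complete(prompt: string): Promise<string> {
    return `[mock] ${prompt}`;
  }
}

function createLLMService(provider: LLMProvider): LLMService {
  switch (provider) {
    case 'mock':
      return new MockLLMService();
    default:
      // Real providers would wrap their respective HTTP APIs here.
      throw new Error(`provider ${provider} not configured in this sketch`);
  }
}
```

Engines then depend only on `LLMService`, so swapping OpenAI, Volcengine, the gateway, or the mock is a construction-time choice rather than a per-engine branch.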

Config options added:
- useLLM: Enable LLM mode for each engine
- llmProvider: Preferred LLM provider
- llmFallbackToRules: Fallback to rules if LLM fails
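The three options combine into a fallback pattern like the following; the config shape and function are a sketch inferred from the option list above, not the repository's actual types:

```typescript
// Hypothetical per-engine config; field names match the commit's
// option list, everything else is assumed.
type LLMProvider = 'openai' | 'volcengine' | 'gateway' | 'mock';

interface EngineLLMConfig {
  useLLM: boolean;             // enable LLM mode for this engine
  llmProvider?: LLMProvider;   // preferred LLM provider
  llmFallbackToRules: boolean; // fall back to rules if the LLM fails
}

// Sketch of the control flow the llmFallbackToRules flag implies.
async function analyzeWithFallback(
  cfg: EngineLLMConfig,
  llmAnalyze: () => Promise<string>,
  ruleAnalyze: () => string,
): Promise<string> {
  if (!cfg.useLLM) return ruleAnalyze();
  try {
    return await llmAnalyze();
  } catch (err) {
    if (cfg.llmFallbackToRules) return ruleAnalyze();
    throw err; // no fallback configured: surface the LLM error
  }
}
```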

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Author: iven
Date: 2026-03-16 10:41:03 +08:00
Parent: ef3315db69
Commit: 0b89329e19
5 changed files with 599 additions and 16 deletions

@@ -10,7 +10,7 @@
 // === Types ===
 export type MemoryType = 'fact' | 'preference' | 'lesson' | 'context' | 'task';
-export type MemorySource = 'auto' | 'user' | 'reflection';
+export type MemorySource = 'auto' | 'user' | 'reflection' | 'llm-reflection';
 export interface MemoryEntry {
   id: string;