feat(l4): upgrade engines with LLM-powered capabilities (Phase 2)
Phase 2 LLM Engine Upgrades:

- ReflectionEngine: Add LLM semantic analysis for pattern detection
- ContextCompactor: Add LLM summarization for high-quality compaction
- MemoryExtractor: Add LLM importance scoring for memory extraction
- Add unified LLM service adapter (OpenAI, Volcengine, Gateway, Mock)
- Add MemorySource 'llm-reflection' for LLM-generated memories
- Add 13 integration tests for LLM-powered features

Config options added:

- useLLM: Enable LLM mode for each engine
- llmProvider: Preferred LLM provider
- llmFallbackToRules: Fall back to rules if LLM fails

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
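The config options listed above suggest a per-engine shape like the following sketch. The `EngineLLMConfig` and `runWithFallback` names, the provider union, and the optionality of `llmProvider` are assumptions; only the three option names and the fallback behavior come from the commit message.

```typescript
// Hypothetical sketch of the per-engine LLM options described in this commit.
// Names other than useLLM / llmProvider / llmFallbackToRules are assumed.
type LLMProvider = 'openai' | 'volcengine' | 'gateway' | 'mock';

interface EngineLLMConfig {
  useLLM: boolean;              // enable LLM mode for this engine
  llmProvider?: LLMProvider;    // preferred LLM provider
  llmFallbackToRules: boolean;  // fall back to rule-based logic if the LLM call fails
}

// One plausible reading of llmFallbackToRules: try the LLM step,
// and on failure run the rule-based step instead of throwing.
async function runWithFallback<T>(
  cfg: EngineLLMConfig,
  llmStep: () => Promise<T>,
  ruleStep: () => T,
): Promise<T> {
  if (!cfg.useLLM) return ruleStep();
  try {
    return await llmStep();
  } catch (err) {
    if (cfg.llmFallbackToRules) return ruleStep();
    throw err;
  }
}
```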
@@ -10,7 +10,7 @@
 // === Types ===

 export type MemoryType = 'fact' | 'preference' | 'lesson' | 'context' | 'task';

-export type MemorySource = 'auto' | 'user' | 'reflection';
+export type MemorySource = 'auto' | 'user' | 'reflection' | 'llm-reflection';

 export interface MemoryEntry {
   id: string;
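A memory produced by the new LLM reflection path would carry the `'llm-reflection'` source tag added in this diff. A minimal sketch follows; the diff only shows that `MemoryEntry` has an `id: string` field, so the `source` and `content` fields here are assumptions for illustration.

```typescript
// MemorySource union as extended by this commit.
type MemorySource = 'auto' | 'user' | 'reflection' | 'llm-reflection';

// Sketch of a memory entry; only `id` is confirmed by the diff,
// `source` and `content` are assumed fields.
interface MemoryEntry {
  id: string;
  source: MemorySource;
  content: string;
}

// An entry tagged as coming from LLM-powered reflection.
const fromLLM: MemoryEntry = {
  id: 'mem-001',
  source: 'llm-reflection',
  content: 'User prefers concise answers.',
};
```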