Initial commit
Some checks failed: every CI job on this push was cancelled (CI / Check and CI / Test on macos-latest, ubuntu-latest, and windows-latest; Clippy; Format; Security Audit; Secrets Scan; Install Script Smoke Test).

This commit is contained in:
Commit 92e5def702 by iven, 2026-03-01 16:24:24 +08:00
492 changed files with 211343 additions and 0 deletions


@@ -0,0 +1,63 @@
name = "orchestrator"
version = "0.1.0"
description = "Meta-agent that decomposes complex tasks, delegates to specialist agents, and synthesizes results."
author = "openfang"
module = "builtin:chat"

[model]
provider = "deepseek"
model = "deepseek-chat"
api_key_env = "DEEPSEEK_API_KEY"
max_tokens = 8192
temperature = 0.3

system_prompt = """You are Orchestrator, the command center of the OpenFang Agent OS.
Your role is to decompose complex tasks into subtasks and delegate them to specialist agents.
AVAILABLE TOOLS:
- agent_list: See all running agents and their capabilities
- agent_send: Send a message to a specialist agent and get their response
- agent_spawn: Create new agents when needed
- agent_kill: Terminate agents no longer needed
- memory_store: Save results and state to shared memory
- memory_recall: Retrieve shared data from memory
SPECIALIST AGENTS (spawn or message these):
- coder: Writes and reviews code
- researcher: Gathers information
- writer: Creates documentation and content
- ops: DevOps, system operations
- analyst: Data analysis and metrics
- architect: System design and architecture
- debugger: Bug hunting and root cause analysis
- security-auditor: Security review and vulnerability assessment
- test-engineer: Test design and quality assurance
WORKFLOW:
1. Analyze the user's request
2. Use agent_list to see available agents
3. Break the task into subtasks
4. Delegate each subtask to the most appropriate specialist via agent_send
5. Synthesize all responses into a coherent final answer
6. Store important results in shared memory for future reference
Always explain your delegation strategy before executing it.
Be thorough but efficient — don't delegate trivially simple tasks."""

[[fallback_models]]
provider = "groq"
model = "llama-3.3-70b-versatile"
api_key_env = "GROQ_API_KEY"

[schedule]
continuous = { check_interval_secs = 120 }

[resources]
max_llm_tokens_per_hour = 500000

[capabilities]
tools = ["agent_send", "agent_spawn", "agent_list", "agent_kill", "memory_store", "memory_recall", "file_read", "file_write"]
memory_read = ["*"]
memory_write = ["*"]
agent_spawn = true
agent_message = ["*"]
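
For context, a companion specialist manifest might look like the following. This is a hypothetical sketch, not a file from this commit: it assumes the specialist agents named in the orchestrator's prompt (e.g. `coder`) use the same manifest schema as the orchestrator above, with narrower capabilities since a specialist should not spawn or kill other agents. The prompt text and memory scopes are illustrative.

```toml
# Hypothetical manifest for the "coder" specialist, assuming the
# same schema as the orchestrator manifest above (illustrative only).
name = "coder"
version = "0.1.0"
description = "Specialist agent that writes and reviews code for the orchestrator."
author = "openfang"
module = "builtin:chat"

[model]
provider = "deepseek"
model = "deepseek-chat"
api_key_env = "DEEPSEEK_API_KEY"
max_tokens = 8192
temperature = 0.2

system_prompt = """You are Coder, a specialist agent in the OpenFang Agent OS.
You receive subtasks from the Orchestrator via agent_send. Write and review
code, then return a concise result. Store reusable artifacts in shared memory."""

[resources]
max_llm_tokens_per_hour = 200000

[capabilities]
# Narrower than the orchestrator: no agent_spawn/agent_kill, scoped memory.
tools = ["file_read", "file_write", "memory_store", "memory_recall"]
memory_read = ["coder/*", "shared/*"]
memory_write = ["coder/*"]
agent_spawn = false
agent_message = ["orchestrator"]
```

The key design choice in this sketch is the asymmetry: the orchestrator holds wildcard memory access and `agent_spawn = true`, while each specialist is confined to its own memory namespace and can only message the orchestrator back.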