Initial commit
Some checks failed
CI / Check / macos-latest (push) Has been cancelled
CI / Check / ubuntu-latest (push) Has been cancelled
CI / Check / windows-latest (push) Has been cancelled
CI / Test / macos-latest (push) Has been cancelled
CI / Test / ubuntu-latest (push) Has been cancelled
CI / Test / windows-latest (push) Has been cancelled
CI / Clippy (push) Has been cancelled
CI / Format (push) Has been cancelled
CI / Security Audit (push) Has been cancelled
CI / Secrets Scan (push) Has been cancelled
CI / Install Script Smoke Test (push) Has been cancelled
50
agents/researcher/agent.toml
Normal file
@@ -0,0 +1,50 @@
name = "researcher"
version = "0.1.0"
description = "Research agent. Fetches web content and synthesizes information."
author = "openfang"
module = "builtin:chat"
tags = ["research", "analysis", "web"]

[model]
provider = "gemini"
model = "gemini-2.5-flash"
api_key_env = "GEMINI_API_KEY"
max_tokens = 4096
temperature = 0.5
system_prompt = """You are Researcher, an information-gathering and synthesis agent running inside the OpenFang Agent OS.

RESEARCH METHODOLOGY:
1. DECOMPOSE — Break the research question into specific sub-questions.
2. SEARCH — Use web_search to find relevant sources. Use multiple queries with different phrasings.
3. DEEP DIVE — Use web_fetch to read promising sources in full. Don't stop at search snippets.
4. CROSS-REFERENCE — Compare information across sources. Note agreements and contradictions.
5. SYNTHESIZE — Combine findings into a clear, structured report.

SOURCE EVALUATION:
- Prefer primary sources (official docs, papers, original reports) over secondary.
- Note publication dates — flag if information may be outdated.
- Distinguish facts from opinions and speculation.
- When sources conflict, present both views with evidence.

OUTPUT:
- Lead with the direct answer to the question.
- Key Findings (numbered, with source attribution).
- Sources Used (with URLs).
- Confidence Level (high / medium / low) and why.
- Open Questions (what couldn't be determined).

Always cite your sources. Never present uncertain information as fact."""

[[fallback_models]]
provider = "groq"
model = "llama-3.3-70b-versatile"
api_key_env = "GROQ_API_KEY"

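The `[[fallback_models]]` array suggests an ordered primary-then-fallbacks selection. A sketch of that logic; skipping entries whose API key env var is unset is an assumption based on the config shape, not documented OpenFang behavior:

```python
import os

def pick_model(primary: dict, fallbacks: list[dict]) -> dict:
    """Return the first model entry whose API key env var is set."""
    for candidate in [primary, *fallbacks]:
        if os.environ.get(candidate["api_key_env"]):
            return candidate
    raise RuntimeError("no configured model has a usable API key")
```

With `GEMINI_API_KEY` absent and `GROQ_API_KEY` set, this would route requests to the Groq fallback.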
[resources]
max_llm_tokens_per_hour = 150000

[capabilities]
tools = ["web_search", "web_fetch", "file_read", "file_write", "file_list", "memory_store", "memory_recall"]
network = ["*"]
memory_read = ["*"]
memory_write = ["self.*", "shared.*"]
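The `memory_write` patterns read like shell-style globs over memory keys: this agent may write under its own namespace and the shared one, but not into other agents' keys. A sketch of that check using stdlib `fnmatch`; the glob semantics are an assumption inferred from the syntax:

```python
from fnmatch import fnmatch

def may_write(key: str, patterns: list[str]) -> bool:
    """True if the memory key matches any allowed glob pattern."""
    return any(fnmatch(key, pattern) for pattern in patterns)
```

Under this reading, `memory_read = ["*"]` grants read access to every key, while writes are confined to `self.*` and `shared.*`.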