Contributing to OpenFang
Thank you for your interest in contributing to OpenFang. This guide covers everything you need to get started, from setting up your development environment to submitting pull requests.
Table of Contents
- Development Environment
- Building and Testing
- Code Style
- Architecture Overview
- How to Add a New Agent Template
- How to Add a New Channel Adapter
- How to Add a New Tool
- Pull Request Process
- Code of Conduct
Development Environment
Prerequisites
- Rust 1.75+ (install via rustup)
- Git
- Python 3.8+ (optional, for Python runtime and skills)
- A supported LLM API key (Anthropic, OpenAI, Groq, etc.) for end-to-end testing
Clone and Build
git clone https://github.com/RightNow-AI/openfang.git
cd openfang
cargo build
The first build takes a few minutes because it compiles SQLite (bundled) and Wasmtime. Subsequent builds are incremental.
Environment Variables
For running integration tests that hit a real LLM, set at least one provider key:
export GROQ_API_KEY=gsk_... # Recommended for fast, free-tier testing
export ANTHROPIC_API_KEY=sk-ant-... # For Anthropic-specific tests
Tests that require a real LLM key will skip gracefully if the env var is absent.
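That skip behavior can be sketched as follows. This is a std-only illustration of the pattern; the helper names here are invented, not the real suite's API:

```rust
// Illustrative env-gated test pattern (helper names are invented,
// not the real suite's API): read the key, skip when it is absent.
fn live_key(var: &str) -> Option<String> {
    std::env::var(var).ok().filter(|k| !k.is_empty())
}

#[test]
fn live_completion_roundtrip() {
    let Some(_key) = live_key("GROQ_API_KEY") else {
        eprintln!("GROQ_API_KEY not set; skipping live LLM test");
        return; // the test passes vacuously instead of failing
    };
    // ... drive a real completion request here ...
}
```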
Building and Testing
Build the Entire Workspace
cargo build --workspace
Run All Tests
cargo test --workspace
The test suite currently contains 1,744+ tests. All must pass before merging.
Run Tests for a Single Crate
cargo test -p openfang-kernel
cargo test -p openfang-runtime
cargo test -p openfang-memory
Check for Clippy Warnings
cargo clippy --workspace --all-targets -- -D warnings
The CI pipeline enforces zero clippy warnings.
Format Code
cargo fmt --all
Always run cargo fmt before committing. CI will reject unformatted code.
Run the Doctor Check
After building, verify your local setup:
cargo run -- doctor
Code Style
- Formatting: Use `rustfmt` with default settings. Run `cargo fmt --all` before every commit.
- Linting: `cargo clippy --workspace -- -D warnings` must pass with zero warnings.
- Documentation: All public types and functions must have doc comments (`///`).
- Error Handling: Use `thiserror` for error types. Avoid `unwrap()` in library code; prefer `?` propagation.
- Naming:
  - Types: `PascalCase` (e.g., `OpenFangKernel`, `AgentManifest`)
  - Functions/methods: `snake_case`
  - Constants: `SCREAMING_SNAKE_CASE`
  - Crate names: `openfang-{name}` (kebab-case)
- Dependencies: Workspace dependencies are declared in the root `Cargo.toml`. Prefer reusing workspace deps over adding new ones. If you need a new dependency, justify it in the PR.
- Testing: Every new feature must include tests. Use `tempfile::TempDir` for filesystem isolation and random port binding for network tests.
- Serde: All config structs use `#[serde(default)]` for forward compatibility with partial TOML.
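The error-handling rule can be illustrated with a dependency-free sketch. The code below hand-rolls what `thiserror` would derive for you, and the type and function names are illustrative, not from the codebase:

```rust
use std::collections::HashMap;
use std::fmt;

// Hand-rolled stand-in for what `#[derive(thiserror::Error)]` gives
// you; in the real crates, use `thiserror` instead of writing this.
#[derive(Debug)]
enum ManifestError {
    MissingField(&'static str),
}

impl fmt::Display for ManifestError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            ManifestError::MissingField(name) => write!(f, "missing field `{name}`"),
        }
    }
}

impl std::error::Error for ManifestError {}

// Library code: no `unwrap()`; absence becomes a typed error.
fn require<'a>(map: &'a HashMap<&str, &'a str>, key: &'static str)
    -> Result<&'a str, ManifestError>
{
    map.get(key).copied().ok_or(ManifestError::MissingField(key))
}

fn agent_name(map: &HashMap<&str, &str>) -> Result<String, ManifestError> {
    let name = require(map, "name")?; // `?` propagation instead of unwrap()
    Ok(name.to_string())
}
```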
Architecture Overview
OpenFang is organized as a Cargo workspace with 14 crates:
| Crate | Role |
|---|---|
| `openfang-types` | Shared type definitions, taint tracking, manifest signing (Ed25519), model catalog, MCP/A2A config types |
| `openfang-memory` | SQLite-backed memory substrate with vector embeddings, usage tracking, canonical sessions, JSONL mirroring |
| `openfang-runtime` | Agent loop, 3 LLM drivers (Anthropic/Gemini/OpenAI-compat), 38 built-in tools, WASM sandbox, MCP client/server, A2A protocol |
| `openfang-hands` | Hands system (curated autonomous capability packages), 7 bundled hands |
| `openfang-extensions` | Integration registry (25 bundled MCP templates), AES-256-GCM credential vault, OAuth2 PKCE |
| `openfang-kernel` | Assembles all subsystems: workflow engine, RBAC auth, heartbeat monitor, cron scheduler, config hot-reload |
| `openfang-api` | REST/WS/SSE API (Axum 0.8), 76 endpoints, 14-page SPA dashboard, OpenAI-compatible /v1/chat/completions |
| `openfang-channels` | 40 channel adapters (Telegram, Discord, Slack, WhatsApp, and 36 more), formatter, rate limiter |
| `openfang-wire` | OFP (OpenFang Protocol): TCP P2P networking with HMAC-SHA256 mutual authentication |
| `openfang-cli` | Clap CLI with daemon auto-detect (HTTP mode vs. in-process fallback), MCP server |
| `openfang-migrate` | Migration engine for importing from OpenClaw (and future frameworks) |
| `openfang-skills` | Skill system: 60 bundled skills, FangHub marketplace, OpenClaw compatibility, prompt injection scanning |
| `openfang-desktop` | Tauri 2.0 native desktop app (WebView + system tray + single-instance + notifications) |
| `xtask` | Build automation tasks |
Key Architectural Patterns
- `KernelHandle` trait: Defined in `openfang-runtime`, implemented on `OpenFangKernel` in `openfang-kernel`. This avoids circular crate dependencies while enabling inter-agent tools.
- Shared memory: A fixed UUID (`AgentId(Uuid::from_bytes([0..0, 0x01]))`) provides a cross-agent KV namespace.
- Daemon detection: The CLI checks `~/.openfang/daemon.json` and pings the health endpoint. If a daemon is running, commands use HTTP; otherwise, they boot an in-process kernel.
- Capability-based security: Every agent operation is checked against the agent's granted capabilities before execution.
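The `KernelHandle` pattern can be sketched in a few std-only lines. The method names below are invented for illustration; the real trait in `openfang-runtime` has a different surface:

```rust
use std::sync::Arc;

// --- conceptually in `openfang-runtime` --------------------------
// The runtime defines the trait it needs from the kernel, so it
// never has to depend on `openfang-kernel`.
trait KernelHandle: Send + Sync {
    fn agent_count(&self) -> usize; // illustrative method only
}

// A built-in tool can talk to "the kernel" through the trait object.
fn describe(kernel: &dyn KernelHandle) -> String {
    format!("kernel is running {} agent(s)", kernel.agent_count())
}

// --- conceptually in `openfang-kernel` ---------------------------
// The kernel depends on the runtime and implements the trait, so
// the dependency edge points one way only.
struct OpenFangKernel {
    agents: Vec<String>,
}

impl KernelHandle for OpenFangKernel {
    fn agent_count(&self) -> usize {
        self.agents.len()
    }
}

fn demo() -> String {
    let kernel = Arc::new(OpenFangKernel { agents: vec!["chat".into()] });
    describe(kernel.as_ref())
}
```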
How to Add a New Agent Template
Agent templates live in the agents/ directory. Each template is a folder containing an agent.toml manifest.
Steps
- Create a new directory under `agents/`:
agents/my-agent/agent.toml
- Write the manifest:
name = "my-agent"
version = "0.1.0"
description = "A brief description of what this agent does."
author = "openfang"
module = "builtin:chat"
tags = ["category"]
[model]
provider = "groq"
model = "llama-3.3-70b-versatile"
[resources]
max_llm_tokens_per_hour = 100000
[capabilities]
tools = ["file_read", "file_list", "web_fetch"]
memory_read = ["*"]
memory_write = ["self.*"]
agent_spawn = false
- Include a system prompt if needed by adding it to the `[model]` section:
[model]
provider = "anthropic"
model = "claude-sonnet-4-20250514"
system_prompt = """
You are a specialized agent that...
"""
- Test by spawning:
openfang agent spawn agents/my-agent/agent.toml
- Submit a PR with the new template.
How to Add a New Channel Adapter
Channel adapters live in crates/openfang-channels/src/. Each adapter implements the ChannelAdapter trait.
Steps
- Create a new file: `crates/openfang-channels/src/myplatform.rs`
- Implement the `ChannelAdapter` trait (defined in `types.rs`):
use crate::types::{ChannelAdapter, ChannelMessage, ChannelType};
use async_trait::async_trait;
pub struct MyPlatformAdapter {
// token, client, config fields
}
#[async_trait]
impl ChannelAdapter for MyPlatformAdapter {
fn channel_type(&self) -> ChannelType {
ChannelType::Custom("myplatform".to_string())
}
async fn start(&mut self) -> Result<(), Box<dyn std::error::Error>> {
// Start polling/listening for messages
Ok(())
}
async fn send(&self, channel_id: &str, content: &str) -> Result<(), Box<dyn std::error::Error>> {
// Send a message back to the platform
Ok(())
}
async fn stop(&mut self) {
// Clean shutdown
}
}
- Register the module in `crates/openfang-channels/src/lib.rs`:
pub mod myplatform;
- Wire it up in the channel bridge (`crates/openfang-api/src/channel_bridge.rs`) so the daemon starts it alongside other adapters.
- Add configuration support in `openfang-types` config structs (add a `[channels.myplatform]` section).
- Add CLI setup wizard instructions in `crates/openfang-cli/src/main.rs` under `cmd_channel_setup`.
- Write tests and submit a PR.
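The configuration-support step above might end up looking something like the following hypothetical TOML fragment. The field names are illustrative only; mirror whatever an existing adapter's section in `openfang-types` actually uses:

```toml
# Hypothetical [channels.myplatform] section -- field names are
# illustrative; copy the shape of an existing adapter's config.
[channels.myplatform]
enabled = true
token = "..."              # or resolve via the credential vault
default_agent = "assistant"
```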
How to Add a New Tool
Built-in tools are defined in crates/openfang-runtime/src/tool_runner.rs.
Steps
- Add the tool implementation function:
async fn tool_my_tool(input: &serde_json::Value) -> Result<String, String> {
let param = input["param"]
.as_str()
.ok_or("Missing 'param' field")?;
// Tool logic here
Ok(format!("Result: {param}"))
}
- Register it in the `execute_tool` match block:
"my_tool" => tool_my_tool(input).await,
- Add the tool definition to `builtin_tool_definitions()`:
ToolDefinition {
name: "my_tool".to_string(),
description: "Description shown to the LLM.".to_string(),
input_schema: serde_json::json!({
"type": "object",
"properties": {
"param": {
"type": "string",
"description": "The parameter description"
}
},
"required": ["param"]
}),
},
- Agents that need the tool must list it in their manifest:
[capabilities]
tools = ["my_tool"]
- Write tests for the tool function.
- If the tool requires kernel access (e.g., inter-agent communication), accept `Option<&Arc<dyn KernelHandle>>` and handle the `None` case gracefully.
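When a tool does need kernel access, the `None` handling can be sketched like this. The trait below is a stand-in defined only for illustration (the real `KernelHandle` lives in `openfang-runtime` and has different methods), and the sketch is synchronous, whereas the real tool functions are `async`:

```rust
use std::sync::Arc;

// Stand-in trait for illustration; the real `KernelHandle` in
// openfang-runtime has a different method surface.
trait KernelHandle: Send + Sync {
    fn agent_names(&self) -> Vec<String>;
}

// Sketch of a kernel-backed tool (shown synchronously; the real
// tool functions are `async`). The `None` arm degrades gracefully
// with an error string instead of panicking.
fn tool_list_agents(kernel: Option<&Arc<dyn KernelHandle>>) -> Result<String, String> {
    match kernel {
        Some(k) => Ok(k.agent_names().join(", ")),
        None => Err("kernel access unavailable in this context".to_string()),
    }
}

// Minimal impl so the sketch can be exercised.
struct DummyKernel {
    names: Vec<String>,
}

impl KernelHandle for DummyKernel {
    fn agent_names(&self) -> Vec<String> {
        self.names.clone()
    }
}
```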
Pull Request Process
- Fork and branch: Create a feature branch from `main`. Use descriptive names like `feat/add-matrix-adapter` or `fix/session-restore-crash`.
- Make your changes: Follow the code style guidelines above.
- Test thoroughly:
  - `cargo test --workspace` must pass (all 1,744+ tests).
  - `cargo clippy --workspace --all-targets -- -D warnings` must produce zero warnings.
  - `cargo fmt --all --check` must produce no diff.
- Write a clear PR description: Explain what changed and why. Include before/after examples if applicable.
- One concern per PR: Keep PRs focused. A single PR should address one feature, one bug fix, or one refactor -- not all three.
- Review process: At least one maintainer must approve before merge. Address review feedback promptly.
- CI must pass: All automated checks must be green before merge.
Commit Messages
Use clear, imperative-mood messages:
Add Matrix channel adapter with E2EE support
Fix session restore crash on kernel reboot
Refactor capability manager to use DashMap
Code of Conduct
This project follows the Contributor Covenant Code of Conduct. By participating, you agree to uphold a welcoming, inclusive, and harassment-free environment for everyone.
Please report unacceptable behavior to the maintainers.
Questions?
- Open a GitHub Discussion for questions.
- Open a GitHub Issue for bugs or feature requests.
- Check the docs/ directory for detailed guides on specific topics.