release(v0.2.0): streaming, MCP protocol, Browser Hand, security enhancements
## Major Features

### Streaming Response System
- Implement LlmDriver trait with `stream()` method returning async Stream
- Add SSE parsing for Anthropic and OpenAI API streaming
- Integrate Tauri event system for frontend streaming (`stream:chunk` events)
- Add StreamChunk types: Delta, ToolStart, ToolEnd, Complete, Error

### MCP Protocol Implementation
- Add MCP JSON-RPC 2.0 types (mcp_types.rs)
- Implement stdio-based MCP transport (mcp_transport.rs)
- Support tool discovery, execution, and resource operations

### Browser Hand Implementation
- Complete browser automation with Playwright-style actions
- Support Navigate, Click, Type, Scrape, Screenshot, Wait actions
- Add educational Hands: Whiteboard, Slideshow, Speech, Quiz

### Security Enhancements
- Implement command whitelist/blacklist for shell_exec tool
- Add SSRF protection with private IP blocking
- Create security.toml configuration file

## Test Improvements
- Fix test import paths (security-utils, setup)
- Fix vi.mock hoisting issues with vi.hoisted()
- Update test expectations for validateUrl and sanitizeFilename
- Add getUnsupportedLocalGatewayStatus mock

## Documentation Updates
- Update architecture documentation
- Improve configuration reference
- Add quick-start guide updates

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
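The streaming bullets above can be sketched roughly as follows. The `StreamChunk` variant names come from the commit message; the payload types and the `parse_sse_line` helper are illustrative assumptions, not the actual implementation (real SSE parsing would buffer events and JSON-decode each payload per provider schema):

```rust
/// Chunk variants named in the commit message; payload types are assumed.
#[derive(Debug, PartialEq)]
pub enum StreamChunk {
    Delta(String),     // incremental text from the model
    ToolStart(String), // a tool invocation began (tool name)
    ToolEnd(String),   // a tool invocation finished (result summary)
    Complete,          // stream ended normally
    Error(String),     // provider or transport error
}

/// Hypothetical sketch: map one SSE line to a chunk. A naive scan for a
/// `"content":"..."` field stands in for proper JSON decoding.
pub fn parse_sse_line(line: &str) -> Option<StreamChunk> {
    let payload = line.strip_prefix("data: ")?.trim();
    if payload == "[DONE]" {
        return Some(StreamChunk::Complete); // OpenAI-style end-of-stream marker
    }
    if let Some(start) = payload.find("\"content\":\"") {
        let rest = &payload[start + 11..]; // skip past `"content":"`
        let end = rest.find('"')?;
        return Some(StreamChunk::Delta(rest[..end].to_string()));
    }
    Some(StreamChunk::Error(format!("unrecognized event: {payload}")))
}
```

Each chunk would then be forwarded to the frontend as a `stream:chunk` Tauri event.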
@@ -11,7 +11,8 @@
| Node.js | 18.x | `node -v` |
| pnpm | 8.x | `pnpm -v` |
| Rust | 1.70+ | `rustc --version` |
| OpenFang | - | `openfang --version` |

+**Important**: ZCLAW uses an internal Kernel architecture; there is **no need** to start an external backend service.

---

@@ -45,21 +46,19 @@ pnpm install
cd desktop && pnpm install && cd ..
```

-### 2. Start the OpenFang backend
+### 2. Configure an LLM provider

-```bash
-# Option A: use the CLI
-openfang start
-
-# Option B: use the pnpm script
-pnpm gateway:start
-```
-
-Verify the backend is running:
-```bash
-curl http://127.0.0.1:50051/api/health
-# Should return: {"status":"ok"}
-```
+**After the first launch**, configure the following on the app's "Models & API" settings page:
+
+1. Click the settings icon ⚙️
+2. Open the "Models & API" page
+3. Click "Add custom model"
+4. Fill in the configuration:
+   - Provider: choose Kimi / Qwen / DeepSeek / Zhipu / OpenAI, etc.
+   - Model ID: e.g. `kimi-k2-turbo`, `qwen-plus`
+   - API Key: your API key
+   - Base URL: (optional) a custom API endpoint
+5. Click "Set as default"

### 3. Start the development environment

@@ -67,14 +66,7 @@ curl http://127.0.0.1:50051/api/health
# Option A: one-command start (recommended)
pnpm start:dev

-# Option B: start the desktop app only (requires a running backend)
-pnpm desktop
-
-# Option C: start the pieces separately
-# Terminal 1 - start the Gateway
-pnpm dev
-
-# Terminal 2 - start the desktop app
+# Option B: start the desktop app only
pnpm desktop
```

@@ -111,17 +103,32 @@ cd desktop && pnpm test:e2e:ui
| Service | Port | Description |
|------|------|------|
-| OpenFang backend | 50051 | API and WebSocket services |
| Vite dev server | 1420 | Frontend hot reload |
| Tauri window | - | Desktop application window |

+**Note**: Port 50051 is no longer needed; all Kernel functionality is built in.
+
---

+## Supported LLM Providers
+
+| Provider | Base URL | Environment variable |
+|----------|----------|----------|
+| Kimi Code | `https://api.kimi.com/coding/v1` | Configured in UI |
+| Bailian/Qwen | `https://dashscope.aliyuncs.com/compatible-mode/v1` | Configured in UI |
+| DeepSeek | `https://api.deepseek.com/v1` | Configured in UI |
+| Zhipu GLM | `https://open.bigmodel.cn/api/paas/v4` | Configured in UI |
+| OpenAI | `https://api.openai.com/v1` | Configured in UI |
+| Anthropic | `https://api.anthropic.com` | Configured in UI |
+| Local/Ollama | `http://localhost:11434/v1` | Configured in UI |
+
+---
+
## Troubleshooting

### Q1: Port already in use

-**Symptom**: `Port 1420 is already in use` or `Port 50051 is already in use`
+**Symptom**: `Port 1420 is already in use`

**Fix**:
```powershell
@@ -134,38 +141,34 @@ lsof -i :1420
kill -9 <PID>
```

-### Q2: Backend connection failure
-
-**Symptom**: `Network Error` or `Connection refused`
-
-**Troubleshooting steps**:
-```bash
-# 1. Check whether the backend is running
-curl http://127.0.0.1:50051/api/health
-
-# 2. Check the port listener
-netstat -ano | findstr "50051"
-
-# 3. Restart the backend
-openfang restart
-```
-
-### Q3: API Key not configured
-
-**Symptom**: `Missing API key: No LLM provider configured`
-
-**Fix**:
-```bash
-# Edit the config file
-# Windows: %USERPROFILE%\.openfang\.env
-# Linux/macOS: ~/.openfang/.env
-
-# Add the API Key
-echo "ZHIPU_API_KEY=your_key" >> ~/.openfang/.env
-
-# Restart the backend
-openfang restart
-```
+### Q2: "Please configure a model in the 'Models & API' settings page first"
+
+**Symptom**: connecting shows "Please configure a model in the 'Models & API' settings page first"
+
+**Fix**:
+1. Open the app settings
+2. Go to the "Models & API" page
+3. Add a custom model and configure its API Key
+4. Set it as the default model
+5. Reconnect
+
+### Q3: LLM call fails
+
+**Symptom**: `Chat failed: LLM error: API error 401` or `404`
+
+**Troubleshooting steps**:
+1. Check that the API Key is correct
+2. Check that the Base URL is correct (especially for Kimi Code users)
+3. Confirm that the model ID is correct
+
+**Common provider configurations**:
+
+| Provider | Example model ID | Base URL |
+|----------|-------------|----------|
+| Kimi Code | `kimi-k2-turbo` | `https://api.kimi.com/coding/v1` |
+| Qwen/Bailian | `qwen-plus` | `https://dashscope.aliyuncs.com/compatible-mode/v1` |
+| DeepSeek | `deepseek-chat` | `https://api.deepseek.com/v1` |
+| Zhipu | `glm-4-flash` | `https://open.bigmodel.cn/api/paas/v4` |

### Q4: Tauri build failure

@@ -202,8 +205,8 @@ pnpm install

After a successful start, verify the following:

-- [ ] Backend health check passes: `curl http://127.0.0.1:50051/api/health`
- [ ] The desktop window displays normally
+- [ ] A custom model has been added on the "Models & API" page
- [ ] Messages can be sent and responses received
- [ ] Agents can be switched
- [ ] The settings page opens
@@ -222,6 +225,29 @@ pnpm start:stop

---

+## Architecture Notes
+
+ZCLAW uses an **internal Kernel architecture**:
+
+```
+┌─────────────────────────────────────────────────────────┐
+│                    ZCLAW desktop app                    │
+├─────────────────────────────────────────────────────────┤
+│  ┌───────────────────┐      ┌─────────────────────────┐ │
+│  │ React frontend    │      │ Tauri backend (Rust)    │ │
+│  │  ├─ KernelClient  │─────▶│  └─ zclaw-kernel        │ │
+│  │  └─ Zustand       │      │      └─ LLM Drivers     │ │
+│  └───────────────────┘      └─────────────────────────┘ │
+└─────────────────────────────────────────────────────────┘
+```
+
+**Key points**:
+- All core capabilities are integrated inside the Tauri app
+- No external backend process needs to be started
+- Model configuration is done through the UI
+
+---
+
## Related Documentation

- [Full development documentation](./DEVELOPMENT.md)
@@ -230,4 +256,4 @@ pnpm start:stop

---

-**Last updated**: 2026-03-21
+**Last updated**: 2026-03-22