autogpt-agents


AutoGPT - Autonomous AI Agent Platform

Comprehensive platform for building, deploying, and managing continuous AI agents through a visual interface or development toolkit.

When to use AutoGPT

Use AutoGPT when:
  • Building autonomous agents that run continuously
  • Creating visual workflow-based AI agents
  • Deploying agents with external triggers (webhooks, schedules)
  • Building complex multi-step automation pipelines
  • You need a no-code/low-code agent builder
Key features:
  • Visual Agent Builder: Drag-and-drop node-based workflow editor
  • Continuous Execution: Agents run persistently with triggers
  • Marketplace: Pre-built agents and blocks to share/reuse
  • Block System: Modular components for LLM, tools, integrations
  • Forge Toolkit: Developer tools for custom agent creation
  • Benchmark System: Standardized agent performance testing
Use alternatives instead:
  • LangChain/LlamaIndex: If you need more control over agent logic
  • CrewAI: For role-based multi-agent collaboration
  • OpenAI Assistants: For simple hosted agent deployments
  • Semantic Kernel: For Microsoft ecosystem integration

Quick start

Installation (Docker)

```bash
# Clone repository
git clone https://github.com/Significant-Gravitas/AutoGPT.git
cd AutoGPT/autogpt_platform

# Copy environment file
cp .env.example .env

# Start backend services
docker compose up -d --build

# Start frontend (in separate terminal)
cd frontend
cp .env.example .env
npm install
npm run dev
```

Access the platform at http://localhost:3000.

Architecture overview

AutoGPT has two main systems:

AutoGPT Platform (Production)

  • Visual agent builder with React frontend
  • FastAPI backend with execution engine
  • PostgreSQL + Redis + RabbitMQ infrastructure

AutoGPT Classic (Development)

  • Forge: Agent development toolkit
  • Benchmark: Performance testing framework
  • CLI: Command-line interface for development

Core concepts

Graphs and nodes

Agents are represented as graphs containing nodes connected by links:
Graph (Agent)
  ├── Node (Input)
  │   └── Block (AgentInputBlock)
  ├── Node (Process)
  │   └── Block (LLMBlock)
  ├── Node (Decision)
  │   └── Block (SmartDecisionMaker)
  └── Node (Output)
      └── Block (AgentOutputBlock)

Blocks

Blocks are reusable functional components:
| Block Type | Purpose |
| --- | --- |
| INPUT | Agent entry points |
| OUTPUT | Agent outputs |
| AI | LLM calls, text generation |
| WEBHOOK | External triggers |
| STANDARD | General operations |
| AGENT | Nested agent execution |

Execution flow

User/Trigger → Graph Execution → Node Execution → Block.execute()
     ↓              ↓                 ↓
  Inputs      Queue System      Output Yields
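The flow above can be sketched in miniature: an executor walks the graph's nodes and calls each block's `execute`, feeding one node's outputs into the next. The `Block` base class and loop-based executor below are simplified stand-ins for illustration, not the platform's real classes.

```python
class Block:
    """Minimal stand-in for a platform block."""
    def execute(self, inputs: dict) -> dict:
        raise NotImplementedError

class UppercaseBlock(Block):
    def execute(self, inputs: dict) -> dict:
        return {"text": inputs["text"].upper()}

class SuffixBlock(Block):
    def execute(self, inputs: dict) -> dict:
        return {"text": inputs["text"] + "!"}

def run_graph(blocks: list[Block], inputs: dict) -> dict:
    # Each node's outputs feed the next node: Graph → Node → Block.execute()
    data = inputs
    for block in blocks:
        data = block.execute(data)
    return data

result = run_graph([UppercaseBlock(), SuffixBlock()], {"text": "hello"})
print(result)  # {'text': 'HELLO!'}
```

In the real platform, outputs are routed along graph links through the queue system rather than a simple loop.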

Building agents

Using the visual builder

  1. Open Agent Builder at http://localhost:3000
  2. Add blocks from the BlocksControl panel
  3. Connect nodes by dragging between handles
  4. Configure inputs in each node
  5. Run agent using PrimaryActionBar

Available blocks

AI Blocks:
  • AITextGeneratorBlock - Generate text with LLMs
  • AIConversationBlock - Multi-turn conversations
  • SmartDecisionMakerBlock - Conditional logic
Integration Blocks:
  • GitHub, Google, Discord, Notion connectors
  • Webhook triggers and handlers
  • HTTP request blocks
Control Blocks:
  • Input/Output blocks
  • Branching and decision nodes
  • Loop and iteration blocks

Agent execution

Trigger types

Manual execution:

```http
POST /api/v1/graphs/{graph_id}/execute
Content-Type: application/json

{
  "inputs": {
    "input_name": "value"
  }
}
```

Webhook trigger:

```http
POST /api/v1/webhooks/{webhook_id}
Content-Type: application/json

{
  "data": "webhook payload"
}
```

Scheduled execution (cron syntax; `0 */2 * * *` runs every two hours):

```json
{
  "schedule": "0 */2 * * *",
  "graph_id": "graph-uuid",
  "inputs": {}
}
```
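As a sketch, the manual-execution request above can be issued from Python with the standard library. The host and port are assumptions for a local deployment (the compose file maps the backend to 8006), and any required auth headers are omitted.

```python
import json
import urllib.request

graph_id = "graph-uuid"  # placeholder ID, as in the scheduled-execution example
url = f"http://localhost:8006/api/v1/graphs/{graph_id}/execute"

payload = json.dumps({"inputs": {"input_name": "value"}}).encode()
req = urllib.request.Request(
    url,
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment with a running backend
```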

Monitoring execution

WebSocket updates:

```javascript
const ws = new WebSocket('ws://localhost:8001/ws');

ws.onmessage = (event) => {
  const update = JSON.parse(event.data);
  console.log(`Node ${update.node_id}: ${update.status}`);
};
```

REST API polling:

```http
GET /api/v1/executions/{execution_id}
```
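The polling endpoint lends itself to a simple loop. The terminal status names and the `fetch_status` callable below are assumptions for illustration; substitute a real HTTP GET against `/api/v1/executions/{execution_id}`.

```python
import time

# Hypothetical terminal statuses; adjust to the actual API's values.
TERMINAL = {"COMPLETED", "FAILED"}

def poll(fetch_status, execution_id: str, interval: float = 0.0, max_tries: int = 10) -> str:
    """Poll until the execution reaches a terminal status or give up."""
    for _ in range(max_tries):
        status = fetch_status(execution_id)  # e.g. GET /api/v1/executions/{id}
        if status in TERMINAL:
            return status
        time.sleep(interval)
    raise TimeoutError(f"execution {execution_id} did not finish")

# Stand-in for the real HTTP call: RUNNING twice, then COMPLETED
responses = iter(["RUNNING", "RUNNING", "COMPLETED"])
print(poll(lambda _id: next(responses), "exec-123"))  # COMPLETED
```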

Using Forge (Development)

Create custom agent

```bash
# Setup forge environment
cd classic
./run setup

# Create new agent from template
./run forge create my-agent

# Start agent server
./run forge start my-agent
```

Agent structure

my-agent/
├── agent.py          # Main agent logic
├── abilities/        # Custom abilities
│   ├── __init__.py
│   └── custom.py
├── prompts/          # Prompt templates
└── config.yaml       # Agent configuration

Implement custom ability

```python
from forge import Ability, ability

@ability(
    name="custom_search",
    description="Search for information",
    parameters={
        "query": {"type": "string", "description": "Search query"}
    }
)
def custom_search(query: str) -> str:
    """Custom search ability."""
    # Implement search logic
    result = perform_search(query)
    return result
```
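To see what the decorator contributes, the pattern can be imitated with a plain Python decorator that attaches the metadata to the function. This is a stand-in for illustration only; Forge's real `ability` implementation does more than this.

```python
# Minimal stand-in for an @ability-style decorator, for illustration only.
def ability(name: str, description: str, parameters: dict):
    def wrap(fn):
        fn.ability_spec = {
            "name": name,
            "description": description,
            "parameters": parameters,
        }
        return fn
    return wrap

@ability(
    name="custom_search",
    description="Search for information",
    parameters={"query": {"type": "string", "description": "Search query"}},
)
def custom_search(query: str) -> str:
    return f"results for {query}"

print(custom_search.ability_spec["name"])  # custom_search
print(custom_search("agents"))             # results for agents
```

The agent runtime can then discover abilities by their attached spec instead of hard-coding function names.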

Benchmarking agents

Run benchmarks

```bash
# Run all benchmarks
./run benchmark

# Run specific category
./run benchmark --category coding

# Run with specific agent
./run benchmark --agent my-agent
```

Benchmark categories

  • Coding: Code generation and debugging
  • Retrieval: Information finding
  • Web: Web browsing and interaction
  • Writing: Text generation tasks

VCR cassettes

Benchmarks use recorded HTTP responses for reproducibility:
```bash
# Record new cassettes
./run benchmark --record

# Run with existing cassettes
./run benchmark --playback
```
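The record/playback idea can be sketched generically: in record mode a live response is captured to a file; in playback mode the stored response is returned with no network access. This is a concept sketch only, not agbenchmark's implementation.

```python
import json
import os
import tempfile

class Cassette:
    """Record responses once, then replay them deterministically."""
    def __init__(self, path: str, mode: str = "playback"):
        self.path = path
        self.mode = mode
        # Load any previously recorded tape from disk
        self.tape = {}
        if os.path.exists(path):
            with open(path) as f:
                self.tape = json.load(f)

    def fetch(self, url: str, live_fetch=None) -> str:
        if self.mode == "record":
            self.tape[url] = live_fetch(url)  # hit the real endpoint once
            with open(self.path, "w") as f:
                json.dump(self.tape, f)
        return self.tape[url]  # replay the stored response

# Record with a stand-in fetcher, then play back without it
path = os.path.join(tempfile.mkdtemp(), "cassette.json")
Cassette(path, mode="record").fetch("https://example.com/api", live_fetch=lambda u: "ok")
replay = Cassette(path)  # default playback mode
print(replay.fetch("https://example.com/api"))  # ok
```

Because playback never touches the network, benchmark runs are reproducible and insensitive to upstream API changes.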

Integrations

Adding credentials

  1. Navigate to Profile > Integrations
  2. Select provider (OpenAI, GitHub, Google, etc.)
  3. Enter API keys or authorize OAuth
  4. Credentials are encrypted and stored securely

Using credentials in blocks

Blocks automatically access user credentials:
```python
class MyLLMBlock(Block):
    def execute(self, inputs):
        # Credentials are injected by the system
        credentials = self.get_credentials("openai")
        client = OpenAI(api_key=credentials.api_key)
        # ...
```

Supported providers

| Provider | Auth Type | Use Cases |
| --- | --- | --- |
| OpenAI | API Key | LLM, embeddings |
| Anthropic | API Key | Claude models |
| GitHub | OAuth | Code, repos |
| Google | OAuth | Drive, Gmail, Calendar |
| Discord | Bot Token | Messaging |
| Notion | OAuth | Documents |

Deployment

Docker production setup

```yaml
# docker-compose.prod.yml
services:
  rest_server:
    image: autogpt/platform-backend
    environment:
      - DATABASE_URL=postgresql://...
      - REDIS_URL=redis://redis:6379
    ports:
      - "8006:8006"

  executor:
    image: autogpt/platform-backend
    command: poetry run executor

  frontend:
    image: autogpt/platform-frontend
    ports:
      - "3000:3000"
```

Environment variables

| Variable | Purpose |
| --- | --- |
| DATABASE_URL | PostgreSQL connection |
| REDIS_URL | Redis connection |
| RABBITMQ_URL | RabbitMQ connection |
| ENCRYPTION_KEY | Credential encryption |
| SUPABASE_URL | Authentication |

Generate encryption key

```bash
cd autogpt_platform/backend
poetry run cli gen-encrypt-key
```

Best practices

  1. Start simple: Begin with 3-5 node agents
  2. Test incrementally: Run and test after each change
  3. Use webhooks: External triggers for event-driven agents
  4. Monitor costs: Track LLM API usage via credits system
  5. Version agents: Save working versions before changes
  6. Benchmark: Use agbenchmark to validate agent quality

Common issues

**Services not starting:**

```bash
# Check container status
docker compose ps

# View logs
docker compose logs rest_server

# Restart services
docker compose restart
```

**Database connection issues:**

```bash
# Run migrations
cd backend
poetry run prisma migrate deploy
```

**Agent execution stuck:**

```bash
# Check RabbitMQ queue, then clear stuck executions
docker compose restart executor
```

References

  • Advanced Usage - Custom blocks, deployment, scaling
  • Troubleshooting - Common issues, debugging

Resources
