# SkillClaw

Framework for collective skill evolution in multi-user LLM agent ecosystems — automatically distills session experience into reusable SKILL.md files and shares them across agent clusters.
## Architecture

```
User → OpenClaw Agent → SkillClaw Client Proxy → Upstream LLM API
                               │
                               ↓ records sessions
                 Shared Storage (OSS/S3/local)
                               ↑ reads sessions, writes skills
                 Evolve Server (workflow or agent)
```

The client proxy sits between agents and the upstream LLM API, forwarding both `/v1/chat/completions` and `/v1/messages` requests while recording sessions to shared storage. Two evolve server variants consume those sessions: `evolve_server` (a fixed workflow pipeline) and `agent_evolve_server` (an OpenClaw-based agent).

## Installation

Client proxy and CLI:

```bash
git clone <repo-url> SkillClaw && cd SkillClaw
bash scripts/install_skillclaw.sh
source .venv/bin/activate
```

Evolve server:

```bash
bash scripts/install_skillclaw_server.sh
source .venv-server/bin/activate

# Required only for the agent evolve server
npm install -g openclaw
```

## Configuration

Export the upstream API and storage settings (see `example_env.sh`):

```bash
export OPENAI_BASE_URL="https://your-api-gateway/v1"
export OPENAI_API_KEY="$OPENAI_API_KEY"

# For the OSS storage backend
export EVOLVE_STORAGE_ENDPOINT="$OSS_ENDPOINT"
export EVOLVE_STORAGE_BUCKET="$OSS_BUCKET"
export OSS_ACCESS_KEY_ID="$OSS_ACCESS_KEY_ID"
export OSS_ACCESS_KEY_SECRET="$OSS_ACCESS_KEY_SECRET"
```

Resolved settings are persisted in `~/.skillclaw/config.yaml`; inspect them with `skillclaw config show`.
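When using the OSS backend, it helps to fail fast if any of the variables above are unset. A minimal sketch — `check_oss_env` is a hypothetical helper for illustration, not part of the skillclaw package:

```python
import os

# The OSS settings the proxy and evolve server expect (from the exports above).
REQUIRED = [
    "EVOLVE_STORAGE_ENDPOINT",
    "EVOLVE_STORAGE_BUCKET",
    "OSS_ACCESS_KEY_ID",
    "OSS_ACCESS_KEY_SECRET",
]

def check_oss_env(env=None):
    """Raise early if any required OSS variable is missing or empty."""
    env = os.environ if env is None else env
    missing = [name for name in REQUIRED if not env.get(name)]
    if missing:
        raise RuntimeError(f"Missing OSS settings: {', '.join(missing)}")
```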
## CLI Reference

Lifecycle and configuration:

```bash
skillclaw setup                  # Initialize config and directories
skillclaw start                  # Start the local proxy server
skillclaw stop                   # Stop the proxy server
skillclaw status                 # Show proxy status and config summary
skillclaw config show            # Dump full resolved config
skillclaw config <key> <value>   # Set a single config value
```

Skill synchronization:

```bash
skillclaw skills pull   # Download shared skills from cloud storage
skillclaw skills push   # Upload local skills to cloud storage
skillclaw skills sync   # Bidirectional sync (pull + push)
```
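Conceptually, a bidirectional sync has to decide what to pull and what to push. A toy sketch by skill name only — the real client may also compare versions or contents:

```python
def plan_sync(local_names, remote_names):
    """Toy model of `skills sync`: pull what only the remote has,
    push what only we have. Name-based; versions/contents ignored."""
    to_pull = sorted(set(remote_names) - set(local_names))
    to_push = sorted(set(local_names) - set(remote_names))
    return to_pull, to_push

pull, push = plan_sync({"web-scraping"}, {"web-scraping", "pdf-parsing"})
# pull == ["pdf-parsing"], push == []
```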
Browsing and benchmarks:

```bash
skillclaw skills list-remote   # Browse skills available in shared storage
skillclaw benchmark --help     # List all benchmark subcommands
```

## Running the Evolve Server

Workflow variant:

```bash
skillclaw-evolve-server \
  --port 8787 \
  --interval 300 \
  --storage-backend oss \
  --oss-endpoint "$EVOLVE_STORAGE_ENDPOINT" \
  --oss-bucket "$EVOLVE_STORAGE_BUCKET" \
  --group-id my-group
```

Agent variant (requires `openclaw` on `PATH`):

```bash
skillclaw-agent-evolve-server \
  --port 8787 \
  --interval 300 \
  --no-fresh \
  --storage-backend oss \
  --oss-endpoint "$EVOLVE_STORAGE_ENDPOINT" \
  --oss-bucket "$EVOLVE_STORAGE_BUCKET" \
  --group-id my-group
```

`--no-fresh` resumes from existing skills instead of starting from scratch. For local development, use the local storage backend with a short interval:

```bash
skillclaw-evolve-server \
  --port 8787 \
  --interval 60 \
  --storage-backend local \
  --local-storage-path ./skill_storage \
  --group-id dev-group
```

| Option | Description | Default |
|---|---|---|
| `--port` | Server port | |
| `--interval` | Seconds between evolution cycles | |
| `--storage-backend` | Storage backend (`oss` or `local`) | |
| `--group-id` | Identifier for your agent cluster | required |
| `--no-fresh` | Resume from existing skills | flag |
| `--oss-endpoint` | OSS endpoint URL | env var |
| `--oss-bucket` | OSS bucket name | env var |
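Conceptually, `--interval` drives the server's outer loop: run one evolution cycle, sleep, repeat. A minimal sketch, not the actual implementation — `evolve_once` is a stand-in for a full cycle:

```python
import time

def run_cycles(evolve_once, interval_s, max_cycles=None):
    """Sketch of the evolve server loop: one cycle, then sleep `interval_s`.
    `max_cycles` bounds the loop for testing; the real server runs indefinitely."""
    cycles = 0
    while max_cycles is None or cycles < max_cycles:
        evolve_once()          # read new sessions, write/refresh skills
        cycles += 1
        time.sleep(interval_s)
    return cycles

ran = run_cycles(lambda: None, interval_s=0, max_cycles=3)
# ran == 3
```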
## SKILL.md Format

Each skill is a single markdown file with YAML frontmatter:

````markdown
---
name: my-skill-name
description: What this skill does
version: 1.0.0
tags: [web, scraping]
---

# Skill Name

## When to Use
...

## Instructions
Step-by-step instructions the agent follows.

## Examples
```python
# working code example
```
````

## Benchmarks

Run the WildClawBench iterative evolution benchmark:

```bash
python scripts/run_wildclawbench_iterative_evolve_agent.py \
  --group-id wildclawbench-test \
  --storage-backend local \
  --local-storage-path ./wb_storage \
  --num-iterations 3
```

## Python API

Skill management:

```python
from skillclaw.skill_manager import SkillManager
from skillclaw.skill_hub import SkillHub

# Initialize with local backend
manager = SkillManager(storage_backend="local", local_path="./skills")

# Pull skills from shared storage
manager.pull()

# List available skills
skills = manager.list_local()
for skill in skills:
    print(f"{skill.name}: {skill.description}")

# Push a new skill
manager.push("path/to/SKILL.md")
```

Embedding the proxy in your own process:

```python
from skillclaw.launcher import SkillClawLauncher
from skillclaw.config import SkillClawConfig

config = SkillClawConfig(
    upstream_base_url="https://api.openai.com/v1",
    upstream_api_key="$OPENAI_API_KEY",  # loaded from env at runtime
    proxy_port=8080,
    storage_backend="local",
    local_storage_path="./skillclaw_data",
    group_id="my-agents",
)

launcher = SkillClawLauncher(config)
launcher.start()
```
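Tooling that consumes the SKILL.md format shown earlier can split off the frontmatter with the stdlib alone. A minimal sketch, assuming the exact layout of the template above — `parse_skill` is illustrative, not part of the skillclaw API:

```python
def parse_skill(text):
    """Split a SKILL.md file into frontmatter fields and markdown body.
    Minimal: assumes the file starts with a `---` ... `---` block and
    that frontmatter values are simple `key: value` scalars."""
    head, sep, body = text.lstrip().removeprefix("---").partition("\n---")
    if not sep:
        raise ValueError("missing frontmatter delimiter")
    meta = {}
    for line in head.strip().splitlines():
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    return meta, body.strip()

meta, body = parse_skill(
    "---\nname: my-skill-name\ndescription: What this skill does\n---\n# Skill Name\n"
)
# meta["name"] == "my-skill-name"; body starts with "# Skill Name"
```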
Once the launcher is running, agents point to `http://localhost:8080/v1`.

## Evolve Server HTTP API

```python
import httpx

# Trigger an immediate evolution cycle
response = httpx.post("http://localhost:8787/evolve")
print(response.json())  # {"status": "ok", "skills_evolved": 3}

# Check server status
status = httpx.get("http://localhost:8787/status")
print(status.json())
```

The evolve server can also be configured through an env file; see `evolve_server/.env.example`:

```bash
# evolve_server/.env.example
OPENAI_BASE_URL="https://your-api-gateway/v1"
OPENAI_API_KEY="$OPENAI_API_KEY"
STORAGE_BACKEND=oss
OSS_ENDPOINT="$EVOLVE_STORAGE_ENDPOINT"
OSS_BUCKET="$EVOLVE_STORAGE_BUCKET"
OSS_ACCESS_KEY_ID="$OSS_ACCESS_KEY_ID"
OSS_ACCESS_KEY_SECRET="$OSS_ACCESS_KEY_SECRET"
GROUP_ID=production-cluster
EVOLVE_INTERVAL=300
```

## Multi-User Deployment

```
User A → Agent (port 8080) ─┐
User B → Agent (port 8081) ─┼──→ Shared OSS Bucket ←── Evolve Server
User C → Agent (port 8082) ─┘          ↑
                                 Skills sync'd
                               back to all agents
```

Each user's machine runs the client proxy, while one central server runs the evolve server:
```bash
skillclaw start --group-id production-cluster --port 8080

# One central server runs:
skillclaw-evolve-server \
  --storage-backend oss \
  --oss-bucket "$SHARED_BUCKET" \
  --group-id production-cluster \
  --interval 300
```

## Troubleshooting

**Proxy won't start or agents can't connect:**

```bash
skillclaw status                   # Check if already running
skillclaw stop && skillclaw start  # Restart
skillclaw config show              # Verify OPENAI_BASE_URL is set
```

**Skills aren't syncing:**

```bash
skillclaw skills list-remote   # Verify storage connection
skillclaw config show          # Check storage backend config
# Confirm env vars are exported: echo $OSS_ACCESS_KEY_ID
```

**Evolve server produces no skills:**

```bash
# Check server logs for cycle output
# Verify --group-id matches the client proxy group-id
# Try --storage-backend local for debugging
skillclaw-evolve-server --storage-backend local --local-storage-path ./debug_storage --group-id debug
```

**Agent evolve server can't find `openclaw`:**

```bash
which openclaw          # Must be in PATH
npm install -g openclaw # Install if missing
# Verify OPENAI_BASE_URL and OPENAI_API_KEY are set for the agent's LLM
```

**Port 8787 already in use:**

```bash
skillclaw stop
lsof -i :8787 | grep LISTEN   # Find conflicting process
skillclaw-evolve-server --port 8788 ...
```

## Repository Structure

```
SkillClaw/
├── skillclaw/                 # Client proxy, CLI, config
│   ├── cli.py                 # All `skillclaw` CLI commands
│   ├── api_server.py          # Proxy server implementation
│   ├── launcher.py            # Process management
│   ├── skill_manager.py       # Local skill CRUD
│   ├── skill_hub.py           # Cloud sync logic
│   └── experiments/           # Benchmark runners
├── evolve_server/             # Workflow evolve server
│   ├── summarizer.py          # Stage 1: session → summary
│   ├── aggregation.py         # Stage 2: summaries → patterns
│   ├── execution.py           # Stage 3: patterns → SKILL.md
│   └── skill_registry.py      # Skill dedup and versioning
├── agent_evolve_server/       # OpenClaw-based evolve server
│   ├── workspace.py           # Session/skill file workspace
│   ├── openclaw_runner.py     # Agent execution harness
│   └── EVOLVE_AGENTS.md       # Agent prompt and tool config
└── scripts/
    ├── install_skillclaw.sh
    ├── install_skillclaw_server.sh
    └── run_wildclawbench_iterative_evolve_agent.py
```
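The `evolve_server/` comments above describe a three-stage pipeline: sessions are summarized, summaries are aggregated into patterns, and patterns are rendered into SKILL.md files. As a sketch of that data flow only — the stage functions here are toy stand-ins, not the real summarizer/aggregation/execution modules:

```python
from collections import Counter

def summarize(session):
    # Stage 1 stand-in: reduce a session to the tools it used.
    return {"tools": session["tools"]}

def aggregate(summaries):
    # Stage 2 stand-in: keep tools that recur across sessions.
    counts = Counter(t for s in summaries for t in s["tools"])
    return [tool for tool, n in counts.items() if n >= 2]

def render_skill(pattern):
    # Stage 3 stand-in: emit a SKILL.md-shaped string per recurring pattern.
    return f"---\nname: use-{pattern}\n---\n# Use {pattern}\n"

def evolve(sessions):
    summaries = [summarize(s) for s in sessions]
    return [render_skill(p) for p in aggregate(summaries)]

skills = evolve([{"tools": ["curl"]}, {"tools": ["curl", "jq"]}])
# one skill, for the recurring "curl" pattern
```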