# llms-txt-generator


## LLM Documentation Generator

Generate structured, AI-readable documentation following the llms.txt standard, with granular files organized by domain.

## Output Structure

```
llm-docs/
├── llm.txt                    # Main index (~1-2 KB)
├── llm.version.txt            # Metadata and sync info (~0.3 KB)
└── llm.{domain}.txt           # Domain-specific files (~3-50 KB each)
```

## Workflow

### Phase 1: Language Selection

Ask the user's preferred language before starting:

```
¿En qué idioma prefieres la documentación? / What language do you prefer?
- Español
- English
- Bilingual (technical terms in English, explanations in Spanish)
```

### Phase 2: Project Analysis

Identify project type and data sources:

| Indicator | Project Type |
|---|---|
| `components/`, design tokens, SCSS | Frontend/UI Library |
| `cmd/`, CLI flags, subcommands | CLI Tool |
| `/api/`, OpenAPI, routes | REST/GraphQL API |
| `src/`, exports, package.json | Generic Library |

Detect structured data sources:

- JSON metadata files (component docs, OpenAPI specs)
- JSDoc/GoDoc comments
- TypeScript definitions
- Configuration files (package.json, go.mod)
- Existing documentation (README, docs/)

### Phase 3: Domain Planning

Based on project type, plan which `llm.{domain}.txt` files to create:

- Frontend/UI: see `references/frontend-example.md`
  - tokens, utilities, styles, brands
  - components-atoms, components-molecules, components-organisms
- CLI Tools: see `references/cli-example.md`
  - commands, core, gateway, deployment, resources, testing, usage
- APIs: see `references/api-example.md`
  - endpoints, models, auth, errors, examples
- Libraries: see `references/library-example.md`
  - api, internals, patterns, examples

### Phase 4: Implementation Decision

Choose an approach based on data availability:

| Condition | Approach |
|---|---|
| Structured data exists (JSON, JSDoc, OpenAPI) | Create a generator script |
| Manual documentation needed | Write static markdown files |
| Mixed sources | Hybrid: script for structured data, manual for the rest |

Generator script benefits:

- Auto-updates when code changes
- DRY principle: single source of truth
- Consistent formatting
- Add an npm/make script: `generate:llms`

### Phase 5: File Generation

#### llm.version.txt (always first)

```markdown
# {Project} LLM Documentation

- Version: {semantic version}
- Last Updated: {YYYY-MM-DD}
- Documentation Version: 1.0.0
- Files: {count} domain files
- Total Size: ~{X} KB
```
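Since llm.version.txt is pure metadata, its generator can be a single template function. A sketch under assumptions: the parameter names (`files`, `totalKb`, etc.) are illustrative, not part of the skill.

```javascript
// Sketch: render llm.version.txt from project metadata.
// Field layout follows the template above; parameter names are assumptions.
function generateVersion({ name, version, files, totalKb }) {
  const today = new Date().toISOString().slice(0, 10); // YYYY-MM-DD
  return [
    `# ${name} LLM Documentation`,
    '',
    `- Version: ${version}`,
    `- Last Updated: ${today}`,
    '- Documentation Version: 1.0.0',
    `- Files: ${files} domain files`,
    `- Total Size: ~${totalKb} KB`,
  ].join('\n');
}

module.exports = { generateVersion };
```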

#### llm.txt (main index)

```markdown
# {Project} - LLM Documentation

## Project Metadata

- Name: {project name}
- Type: {frontend|cli|api|library}
- Language: {primary language}
- Purpose: {one-line description}

## Quick Reference

- Key Modules: {list main areas}
- Patterns: {architectural patterns used}
- Dependencies: {key dependencies}

## Documentation Structure

### {Domain 1}

`llm.{domain1}.txt`

- Focus: {what this file covers}
- Use when: {scenarios to read this file}

### {Domain 2}

...

## Reading Guide

1. Start with `llm.version.txt` for metadata
2. Read `llm.{primary-domain}.txt` for core concepts
3. Reference other files as needed
```

#### llm.{domain}.txt (domain files)

Each domain file follows this structure:

```markdown
# {Domain} - {Project}

## Overview

{2-3 sentences explaining this domain}

## {Section 1}

| Name | Type | Description |
|---|---|---|
| ... | ... | ... |

## {Section 2}

### {Subsection}

{Content with code examples}

## Related Files

- `llm.{related}.txt` - {why related}
```

## Best Practices

1. File size: keep each file under 50 KB for optimal LLM context usage
2. Cross-references: link between files with clear "when to read" guidance
3. Tables: use markdown tables for properties, tokens, parameters
4. Code examples: include practical, copy-pasteable examples
5. Hierarchy: use consistent heading levels (H1 for title, H2 for sections, H3 for subsections)

## Generator Script Pattern

When creating a generator script:

```javascript
// Structure
const config = { COMPONENTS_DIR, OUTPUT_DIR, ... };

// Utilities
function readFile(path) { ... }
function writeOutput(filename, content) { ... }

// Extractors (one per data source)
function extractComponents() { ... }
function extractTokens() { ... }

// Generators (one per output file)
function generateIndex() { ... }
function generateVersion() { ... }
function generateDomain() { ... }

// Main
function main() {
  // Extract all data
  // Generate all files
  // Log summary
}

// Export for testing
module.exports = { extractors, generators };

// Run if main
if (require.main === module) main();
```

Add to package.json:

```json
{
  "scripts": {
    "generate:llms": "node build-scripts/create-llms-docs.js"
  }
}
```

## Complete Output Example

### Project: Deployment CLI

After analyzing a CLI tool, the skill generates:

**llm-docs/llm.version.txt**

```markdown
# DeployCLI LLM Documentation

- Version: 2.1.0
- Last Updated: 2025-12-15
- Documentation Version: 1.0.0
- Files: 4 domain files
- Total Size: ~35 KB
```

**llm-docs/llm.txt**

```markdown
# DeployCLI - LLM Documentation

## Project Metadata

- Name: deploy-cli
- Type: CLI Tool
- Language: TypeScript
- Purpose: Deploy applications to multiple cloud providers

## Quick Reference

- Key Modules: commands, providers, config
- Patterns: Command pattern, Provider abstraction
- Dependencies: commander, chalk, ora

## Documentation Structure

### Commands

`llm.commands.txt`

- Focus: All CLI commands and subcommands
- Use when: Need to understand available commands and flags

### Providers

`llm.providers.txt`

- Focus: Cloud provider integrations (AWS, GCP, Vercel)
- Use when: Adding or modifying provider support

### Configuration

`llm.config.txt`

- Focus: Config file format and options
- Use when: Understanding how users configure the CLI
```

**llm-docs/llm.commands.txt**

````markdown
# Commands - DeployCLI

## Overview

DeployCLI exposes 5 main commands for deployment management.

## Commands

| Command | Description | Flags |
|---|---|---|
| `deploy` | Deploy to target provider | `--provider`, `--env`, `--dry-run` |
| `rollback` | Revert to previous deployment | `--version`, `--force` |
| `status` | Check deployment status | `--watch`, `--json` |
| `config` | Manage configuration | `--init`, `--validate` |
| `logs` | Stream deployment logs | `--follow`, `--since` |

## deploy

Main deployment command.

### Usage

```bash
deploy-cli deploy --provider aws --env production
```

### Flags

- `--provider, -p`: Target provider (aws, gcp, vercel)
- `--env, -e`: Environment (development, staging, production)
- `--dry-run`: Simulate without deploying
- `--config, -c`: Path to config file

## Related Files

- `llm.providers.txt` - Provider-specific deployment details
- `llm.config.txt` - Configuration options for deployments
````

## Using the Output

Once generated, the files can be:

1. Included in AI prompts:

   ```
   @llm-docs/llm.commands.txt How do I deploy to staging?
   ```

2. Referenced in CLAUDE.md:

   ```markdown
   ## LLM Documentation
   See `llm-docs/` for AI-optimized documentation.
   ```

3. Maintained automatically:

   ```bash
   npm run generate:llms  # Regenerate after changes
   ```