# Transformers

## Overview

The Transformers library provides state-of-the-art machine learning models for NLP, computer vision, audio, and multimodal tasks. Apply this skill for quick inference through pipelines, comprehensive training via the Trainer API, and flexible text generation with various decoding strategies.

## Core Capabilities

### 1. Quick Inference with Pipelines

For rapid inference without complex setup, use the `pipeline()` API. Pipelines abstract away tokenization, model invocation, and post-processing.

```python
from transformers import pipeline

# Text classification
classifier = pipeline("text-classification")
result = classifier("This product is amazing!")

# Named entity recognition
ner = pipeline("token-classification")
entities = ner("Sarah works at Microsoft in Seattle")

# Question answering
qa = pipeline("question-answering")
answer = qa(question="What is the capital?", context="Paris is the capital of France.")

# Text generation
generator = pipeline("text-generation", model="gpt2")
text = generator("Once upon a time", max_length=50)

# Image classification
image_classifier = pipeline("image-classification")
predictions = image_classifier("image.jpg")
```

**When to use pipelines:**
- Quick prototyping and testing
- Simple inference tasks without custom logic
- Demonstrations and examples
- Production inference for standard tasks

**Available pipeline tasks:**
- **NLP**: text-classification, token-classification, question-answering, summarization, translation, text-generation, fill-mask, zero-shot-classification
- **Vision**: image-classification, object-detection, image-segmentation, depth-estimation, zero-shot-image-classification
- **Audio**: automatic-speech-recognition, audio-classification, text-to-audio
- **Multimodal**: image-to-text, visual-question-answering, image-text-to-text

For comprehensive pipeline documentation, see `references/pipelines.md`.
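Pipelines also accept a list of inputs, which batches the work for you. A minimal sketch, assuming the pipeline's default text-classification checkpoint (the exact labels and scores depend on that model):

```python
from transformers import pipeline

classifier = pipeline("text-classification")

# A list input yields one result dict per item, in the same order
results = classifier([
    "This product is amazing!",
    "Terrible experience, would not recommend.",
])
for result in results:
    print(result["label"], round(result["score"], 3))
```

For large workloads, pass `batch_size=` to the pipeline call to control how many inputs are grouped per forward pass.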

### 2. Model Training and Fine-Tuning

Use the Trainer API for comprehensive model training with support for distributed training, mixed precision, and advanced optimization.

Basic training workflow:

```python
from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,
    TrainingArguments,
    Trainer,
)
from datasets import load_dataset

# 1. Load and tokenize data
dataset = load_dataset("imdb")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize_function(examples):
    return tokenizer(examples["text"], padding="max_length", truncation=True)

tokenized_datasets = dataset.map(tokenize_function, batched=True)

# 2. Load model
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

# 3. Configure training
training_args = TrainingArguments(
    output_dir="./results",
    num_train_epochs=3,
    per_device_train_batch_size=16,
    eval_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,
)

# 4. Create trainer and train
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_datasets["train"],
    eval_dataset=tokenized_datasets["test"],
)
trainer.train()
```

**Key training features:**
- Mixed precision training (fp16/bf16)
- Distributed training (multi-GPU, multi-node)
- Gradient accumulation
- Learning rate scheduling with warmup
- Checkpoint management
- Hyperparameter search
- Push to Hugging Face Hub

For detailed training documentation, see `references/training.md`.
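The `Trainer` above reports only loss during evaluation. To add task metrics, pass a `compute_metrics` function, which receives an `(logits, labels)` tuple; a minimal accuracy sketch:

```python
import numpy as np

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)  # highest-scoring class per example
    return {"accuracy": float((predictions == labels).mean())}
```

Pass it when constructing the trainer: `Trainer(..., compute_metrics=compute_metrics)`.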

### 3. Text Generation

Generate text using various decoding strategies including greedy decoding, beam search, sampling, and more.

Generation strategies:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
inputs = tokenizer("Once upon a time", return_tensors="pt")

# Greedy decoding (deterministic)
outputs = model.generate(**inputs, max_new_tokens=50)

# Beam search (explores multiple hypotheses)
outputs = model.generate(
    **inputs, max_new_tokens=50, num_beams=5, early_stopping=True
)

# Sampling (creative, diverse)
outputs = model.generate(
    **inputs,
    max_new_tokens=50,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    top_k=50,
)
```

**Generation parameters:**
- `temperature`: Controls randomness (0.1-2.0)
- `top_k`: Sample from top-k tokens
- `top_p`: Nucleus sampling threshold
- `num_beams`: Number of beams for beam search
- `repetition_penalty`: Discourage repetition
- `no_repeat_ngram_size`: Prevent repeating n-grams

For comprehensive generation documentation, see `references/generation_strategies.md`.
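`generate()` returns token IDs, not text, so the final step is always `tokenizer.decode()`. A tokenizer-only round trip shows the idea without downloading model weights:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
inputs = tokenizer("Once upon a time", return_tensors="pt")

# After model.generate(**inputs, ...), decode outputs[0] the same way
text = tokenizer.decode(inputs["input_ids"][0], skip_special_tokens=True)
print(text)  # Once upon a time
```

Use `tokenizer.batch_decode(outputs)` when decoding multiple generated sequences at once.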

### 4. Task-Specific Patterns

Common task patterns with appropriate model classes:

Text Classification:

```python
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=3,
    id2label={0: "negative", 1: "neutral", 2: "positive"}
)
```

Named Entity Recognition (Token Classification):

```python
from transformers import AutoModelForTokenClassification

model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=9  # Number of entity types
)
```

Question Answering:

```python
from transformers import AutoModelForQuestionAnswering

model = AutoModelForQuestionAnswering.from_pretrained("bert-base-uncased")
```

Summarization and Translation (Seq2Seq):

```python
from transformers import AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")
```

Image Classification:

```python
from transformers import AutoModelForImageClassification

model = AutoModelForImageClassification.from_pretrained(
    "google/vit-base-patch16-224",
    num_labels=num_classes
)
```

For detailed task-specific workflows including data preprocessing, training, and evaluation, see `references/task_patterns.md`.

## Auto Classes

Use Auto classes for automatic architecture selection based on model checkpoints:

```python
from transformers import (
    AutoTokenizer,           # Tokenization
    AutoModel,               # Base model (hidden states)
    AutoModelForSequenceClassification,
    AutoModelForTokenClassification,
    AutoModelForQuestionAnswering,
    AutoModelForCausalLM,    # GPT-style
    AutoModelForMaskedLM,    # BERT-style
    AutoModelForSeq2SeqLM,   # T5, BART
    AutoProcessor,           # For multimodal models
    AutoImageProcessor,      # For vision models
)

# Load any model by name
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
```

For comprehensive API documentation, see `references/api_reference.md`.

## Model Loading and Optimization

Device placement:

```python
model = AutoModel.from_pretrained("bert-base-uncased", device_map="auto")
```

Mixed precision:

```python
import torch

model = AutoModel.from_pretrained(
    "model-name",
    torch_dtype=torch.float16  # or torch.bfloat16
)
```

Quantization:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=quantization_config,
    device_map="auto"
)
```
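Back-of-the-envelope arithmetic shows why these options matter; the sketch below counts weight storage only (activations, KV cache, and optimizer state add more) for a 7B-parameter model like the Llama-2 example:

```python
params = 7_000_000_000  # 7B parameters
GIB = 1024 ** 3

fp32_gib = params * 4 / GIB    # 4 bytes per weight
fp16_gib = params * 2 / GIB    # 2 bytes per weight
int4_gib = params * 0.5 / GIB  # 4 bits per weight

print(f"fp32 ~{fp32_gib:.0f} GiB, fp16 ~{fp16_gib:.0f} GiB, 4-bit ~{int4_gib:.0f} GiB")
```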

## Common Workflows

### Quick Inference Workflow

1. Choose the appropriate pipeline for the task
2. Load the pipeline, optionally specifying a model
3. Pass inputs and get results
4. For batch processing, pass a list of inputs

See `scripts/quick_inference.py` for comprehensive pipeline examples.

### Training Workflow

1. Load and preprocess the dataset using 🤗 Datasets
2. Tokenize the data with the appropriate tokenizer
3. Load a pre-trained model for the specific task
4. Configure `TrainingArguments`
5. Create a `Trainer` with the model, data, and `compute_metrics`
6. Train with `trainer.train()`
7. Evaluate with `trainer.evaluate()`
8. Save the model and optionally push it to the Hub

See `scripts/fine_tune_classifier.py` for a complete training example.

### Text Generation Workflow

1. Load a causal or seq2seq language model
2. Load the tokenizer and tokenize the prompt
3. Choose a generation strategy (greedy, beam search, sampling)
4. Configure generation parameters
5. Generate with `model.generate()`
6. Decode the output tokens to text

See `scripts/generate_text.py` for generation strategy examples.
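Steps 3-4 can be captured in a reusable `GenerationConfig` instead of repeating keyword arguments on every call; a small sketch:

```python
from transformers import GenerationConfig

creative = GenerationConfig(
    max_new_tokens=50,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)

# Then generate with: model.generate(**inputs, generation_config=creative)
```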

## Best Practices

1. **Use Auto classes** for flexibility across different model architectures
2. **Batch processing** for efficiency - process multiple inputs at once
3. **Device management** - use `device_map="auto"` for automatic placement
4. **Memory optimization** - enable fp16/bf16 or quantization for large models
5. **Checkpoint management** - save checkpoints regularly and load the best model
6. **Pipelines for quick tasks** - use pipelines for standard inference tasks
7. **Custom metrics** - define `compute_metrics` for task-specific evaluation
8. **Gradient accumulation** - use for large effective batch sizes on limited memory
9. **Learning rate warmup** - typically 5-10% of total training steps
10. **Hub integration** - push trained models to the Hub for sharing and versioning
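The gradient-accumulation and warmup guidance above reduces to quick arithmetic; the numbers below are illustrative:

```python
# Effective batch size = per-device batch × accumulation steps × device count
per_device_batch = 8
accumulation_steps = 4
num_devices = 2
effective_batch = per_device_batch * accumulation_steps * num_devices
print(effective_batch)  # 64

# Warmup steps: roughly 5-10% of total optimizer steps
total_steps = 10_000
warmup_steps = int(0.06 * total_steps)
print(warmup_steps)  # 600
```

In `TrainingArguments` these map to `gradient_accumulation_steps` and `warmup_steps` (or `warmup_ratio`).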

## Resources

### scripts/

Executable Python scripts demonstrating common Transformers workflows:

- `quick_inference.py` - Pipeline examples for NLP, vision, audio, and multimodal tasks
- `fine_tune_classifier.py` - Complete fine-tuning workflow with the Trainer API
- `generate_text.py` - Text generation with various decoding strategies

Run the scripts directly to see the examples in action:

```bash
python scripts/quick_inference.py
python scripts/fine_tune_classifier.py
python scripts/generate_text.py
```

### references/

Comprehensive reference documentation loaded into context as needed:

- `api_reference.md` - Core classes and APIs (Auto classes, Trainer, GenerationConfig, etc.)
- `pipelines.md` - All available pipelines organized by modality, with examples
- `training.md` - Training patterns, TrainingArguments, distributed training, callbacks
- `generation_strategies.md` - Text generation methods, decoding strategies, parameters
- `task_patterns.md` - Complete workflows for common tasks (classification, NER, QA, summarization, etc.)

When working on specific tasks or features, load the relevant reference file for detailed guidance.

## Additional Information