llamaindex

LlamaIndex - Data Framework for LLM Applications

The leading framework for connecting LLMs with your data.

When to use LlamaIndex

Use LlamaIndex when:
  • Building RAG (retrieval-augmented generation) applications
  • Need document question-answering over private data
  • Ingesting data from multiple sources (300+ connectors)
  • Creating knowledge bases for LLMs
  • Building chatbots with enterprise data
  • Need structured data extraction from documents
Metrics:
  • 45,100+ GitHub stars
  • 23,000+ repositories use LlamaIndex
  • 300+ data connectors (LlamaHub)
  • 1,715+ contributors
  • v0.14.7 (stable)
Use an alternative instead when:
  • LangChain: More general-purpose, better for agents
  • Haystack: Production search pipelines
  • txtai: Lightweight semantic search
  • Chroma: Just need vector storage

Quick start

Installation

```bash
# Starter package (recommended)
pip install llama-index

# Or minimal core + specific integrations
pip install llama-index-core
pip install llama-index-llms-openai
pip install llama-index-embeddings-openai
```

5-line RAG example

```python
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader

# Load documents
documents = SimpleDirectoryReader("data").load_data()

# Create index
index = VectorStoreIndex.from_documents(documents)

# Query
query_engine = index.as_query_engine()
response = query_engine.query("What did the author do growing up?")
print(response)
```

Core concepts

1. Data connectors - Load documents

```python
from llama_index.core import SimpleDirectoryReader, Document
from llama_index.readers.web import SimpleWebPageReader
from llama_index.readers.github import GithubRepositoryReader

# Directory of files
documents = SimpleDirectoryReader("./data").load_data()

# Web pages
reader = SimpleWebPageReader()
documents = reader.load_data(["https://example.com"])

# GitHub repository
reader = GithubRepositoryReader(owner="user", repo="repo")
documents = reader.load_data(branch="main")

# Manual document creation
doc = Document(
    text="This is the document content",
    metadata={"source": "manual", "date": "2025-01-01"},
)
```

2. Indices - Structure data

```python
from llama_index.core import VectorStoreIndex, ListIndex, TreeIndex

# Vector index (most common - semantic search)
vector_index = VectorStoreIndex.from_documents(documents)

# List index (sequential scan)
list_index = ListIndex.from_documents(documents)

# Tree index (hierarchical summary)
tree_index = TreeIndex.from_documents(documents)

# Save index
index.storage_context.persist(persist_dir="./storage")

# Load index
from llama_index.core import load_index_from_storage, StorageContext

storage_context = StorageContext.from_defaults(persist_dir="./storage")
index = load_index_from_storage(storage_context)
```

3. Query engines - Ask questions

```python
# Basic query
query_engine = index.as_query_engine()
response = query_engine.query("What is the main topic?")
print(response)

# Streaming response
query_engine = index.as_query_engine(streaming=True)
response = query_engine.query("Explain quantum computing")
for text in response.response_gen:
    print(text, end="", flush=True)

# Custom configuration
query_engine = index.as_query_engine(
    similarity_top_k=3,       # Return top 3 chunks
    response_mode="compact",  # Or "tree_summarize", "simple_summarize"
    verbose=True,
)
```

4. Retrievers - Find relevant chunks

```python
from llama_index.core.retrievers import BaseRetriever

# Vector retriever
retriever = index.as_retriever(similarity_top_k=5)
nodes = retriever.retrieve("machine learning")

# With filtering
retriever = index.as_retriever(
    similarity_top_k=3,
    filters={"metadata.category": "tutorial"},
)

# Custom retriever
class CustomRetriever(BaseRetriever):
    def _retrieve(self, query_bundle):
        # Your custom retrieval logic
        return nodes
```
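Conceptually, a vector retriever embeds the query and keeps the `similarity_top_k` stored chunks whose embeddings are closest. A dependency-free sketch of that idea follows; the `embed`, `cosine`, and `retrieve` helpers are illustrative stand-ins, not LlamaIndex APIs, and the toy letter-count "embedding" is only for demonstration.

```python
import math

# Toy sketch of vector retrieval: rank stored chunks by cosine similarity
# to the query embedding and keep the top k. embed() is a stand-in for a
# real embedding model (here: bag-of-letters counts, illustration only).

def embed(text: str) -> list[float]:
    return [float(text.lower().count(c)) for c in "abcdefghijklmnopqrstuvwxyz"]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], top_k: int = 2) -> list[str]:
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:top_k]

chunks = ["machine learning basics", "cooking pasta", "deep learning models"]
print(retrieve("learning machines", chunks, top_k=2))
```

A real index replaces `embed` with a model such as OpenAI or HuggingFace embeddings and stores vectors in a vector store instead of recomputing them per query.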

Agents with tools

Basic agent

```python
from llama_index.core.agent import FunctionAgent
from llama_index.llms.openai import OpenAI

# Define tools
def multiply(a: int, b: int) -> int:
    """Multiply two numbers."""
    return a * b

def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

# Create agent
llm = OpenAI(model="gpt-4o")
agent = FunctionAgent.from_tools(
    tools=[multiply, add],
    llm=llm,
    verbose=True,
)

# Use agent
response = agent.chat("What is 25 * 17 + 142?")
print(response)
```

RAG agent (document search + tools)

```python
from llama_index.core.tools import QueryEngineTool

# Create index as before
index = VectorStoreIndex.from_documents(documents)

# Wrap query engine as tool
query_tool = QueryEngineTool.from_defaults(
    query_engine=index.as_query_engine(),
    name="python_docs",
    description="Useful for answering questions about Python programming",
)

# Agent with document search + calculator
agent = FunctionAgent.from_tools(
    tools=[query_tool, multiply, add],
    llm=llm,
)

# Agent decides when to search docs vs calculate
response = agent.chat("According to the docs, what is Python used for?")
```
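The routing the agent performs is essentially: the LLM picks a tool by name, the framework calls the matching function with parsed arguments, and the result feeds the next step. That plumbing can be sketched without an LLM in the loop; `TOOLS` and `call_tool` below are illustrative names, not LlamaIndex APIs, and the tool choices are hard-coded for the demo.

```python
# Toy sketch of agent-style tool dispatch: the LLM would emit a tool name
# plus arguments; here we hard-code that choice to show the plumbing.

def multiply(a: int, b: int) -> int:
    """Multiply two numbers."""
    return a * b

def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

TOOLS = {fn.__name__: fn for fn in (multiply, add)}

def call_tool(name: str, **kwargs):
    # Look the tool up by name and invoke it with the given arguments.
    return TOOLS[name](**kwargs)

# "What is 25 * 17 + 142?" decomposed into two tool calls:
step1 = call_tool("multiply", a=25, b=17)  # 425
result = call_tool("add", a=step1, b=142)  # 567
print(result)
```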

Advanced RAG patterns

Chat engine (conversational)

```python
from llama_index.core.chat_engine import CondensePlusContextChatEngine

# Chat with memory
chat_engine = index.as_chat_engine(
    chat_mode="condense_plus_context",  # Or "context", "react"
    verbose=True,
)

# Multi-turn conversation
response1 = chat_engine.chat("What is Python?")
response2 = chat_engine.chat("Can you give examples?")  # Remembers context
response3 = chat_engine.chat("What about web frameworks?")
```

Metadata filtering

```python
from llama_index.core.vector_stores import MetadataFilters, ExactMatchFilter

# Filter by metadata
filters = MetadataFilters(
    filters=[
        ExactMatchFilter(key="category", value="tutorial"),
        ExactMatchFilter(key="difficulty", value="beginner"),
    ]
)

retriever = index.as_retriever(
    similarity_top_k=3,
    filters=filters,
)
query_engine = index.as_query_engine(filters=filters)
```
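By default the filters are combined with AND semantics: a node is retrieved only if every exact-match filter agrees with its metadata. A dependency-free sketch of that behavior over plain dicts (the `matches` helper and sample data are illustrative, not LlamaIndex APIs):

```python
# Toy sketch of AND-combined exact-match metadata filtering.

def matches(metadata: dict, filters: dict) -> bool:
    # A node passes only if every filter key/value pair matches exactly.
    return all(metadata.get(k) == v for k, v in filters.items())

nodes = [
    {"text": "Intro to Python", "category": "tutorial", "difficulty": "beginner"},
    {"text": "Async deep dive", "category": "tutorial", "difficulty": "advanced"},
    {"text": "Release notes",   "category": "changelog", "difficulty": "beginner"},
]

wanted = {"category": "tutorial", "difficulty": "beginner"}
kept = [n["text"] for n in nodes if matches(n, wanted)]
print(kept)  # ['Intro to Python']
```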

Structured output

```python
from pydantic import BaseModel
from llama_index.core.output_parsers import PydanticOutputParser

class Summary(BaseModel):
    title: str
    main_points: list[str]
    conclusion: str

# Get structured response
output_parser = PydanticOutputParser(output_cls=Summary)
query_engine = index.as_query_engine(output_parser=output_parser)

response = query_engine.query("Summarize the document")
summary = response  # Pydantic model
print(summary.title, summary.main_points)
```

Data ingestion patterns

Multiple file types

```python
# Load all supported formats
documents = SimpleDirectoryReader(
    "./data",
    recursive=True,
    required_exts=[".pdf", ".docx", ".txt", ".md"],
).load_data()
```

Web scraping

```python
from llama_index.readers.web import BeautifulSoupWebReader

reader = BeautifulSoupWebReader()
documents = reader.load_data(urls=[
    "https://docs.python.org/3/tutorial/",
    "https://docs.python.org/3/library/",
])
```

Database

```python
from llama_index.readers.database import DatabaseReader

reader = DatabaseReader(
    sql_database_uri="postgresql://user:pass@localhost/db"
)
documents = reader.load_data(query="SELECT * FROM articles")
```

API endpoints

```python
from llama_index.readers.json import JSONReader

reader = JSONReader()
documents = reader.load_data("https://api.example.com/data.json")
```

Vector store integrations

Chroma (local)

```python
import chromadb
from llama_index.core import StorageContext
from llama_index.vector_stores.chroma import ChromaVectorStore

# Initialize Chroma
db = chromadb.PersistentClient(path="./chroma_db")
collection = db.get_or_create_collection("my_collection")

# Create vector store
vector_store = ChromaVectorStore(chroma_collection=collection)

# Use in index
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)
```

Pinecone (cloud)

```python
import pinecone
from llama_index.vector_stores.pinecone import PineconeVectorStore

# Initialize Pinecone
pinecone.init(api_key="your-key", environment="us-west1-gcp")
pinecone_index = pinecone.Index("my-index")

# Create vector store
vector_store = PineconeVectorStore(pinecone_index=pinecone_index)
storage_context = StorageContext.from_defaults(vector_store=vector_store)

index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)
```

FAISS (fast)

```python
import faiss
from llama_index.vector_stores.faiss import FaissVectorStore

# Create FAISS index
d = 1536  # Dimension of embeddings
faiss_index = faiss.IndexFlatL2(d)

vector_store = FaissVectorStore(faiss_index=faiss_index)
storage_context = StorageContext.from_defaults(vector_store=vector_store)

index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)
```

Customization

Custom LLM

```python
from llama_index.llms.anthropic import Anthropic
from llama_index.core import Settings

# Set global LLM
Settings.llm = Anthropic(model="claude-sonnet-4-5-20250929")

# Now all queries use Anthropic
query_engine = index.as_query_engine()
```

Custom embeddings

```python
from llama_index.embeddings.huggingface import HuggingFaceEmbedding

# Use HuggingFace embeddings
Settings.embed_model = HuggingFaceEmbedding(
    model_name="sentence-transformers/all-mpnet-base-v2"
)
index = VectorStoreIndex.from_documents(documents)
```

Custom prompt templates

```python
from llama_index.core import PromptTemplate

qa_prompt = PromptTemplate(
    "Context: {context_str}\n"
    "Question: {query_str}\n"
    "Answer the question based only on the context. "
    "If the answer is not in the context, say 'I don't know'.\n"
    "Answer: "
)

query_engine = index.as_query_engine(text_qa_template=qa_prompt)
```

Multi-modal RAG

Image + text

```python
from llama_index.core import SimpleDirectoryReader
from llama_index.multi_modal_llms.openai import OpenAIMultiModal

# Load images and documents
documents = SimpleDirectoryReader(
    "./data",
    required_exts=[".jpg", ".png", ".pdf"],
).load_data()

# Multi-modal index
index = VectorStoreIndex.from_documents(documents)

# Query with multi-modal LLM
multi_modal_llm = OpenAIMultiModal(model="gpt-4o")
query_engine = index.as_query_engine(llm=multi_modal_llm)

response = query_engine.query("What is in the diagram on page 3?")
```

Evaluation

Response quality

```python
from llama_index.core.evaluation import RelevancyEvaluator, FaithfulnessEvaluator

# Evaluate relevance
relevancy = RelevancyEvaluator()
result = relevancy.evaluate_response(
    query="What is Python?", response=response
)
print(f"Relevancy: {result.passing}")

# Evaluate faithfulness (no hallucination)
faithfulness = FaithfulnessEvaluator()
result = faithfulness.evaluate_response(
    query="What is Python?", response=response
)
print(f"Faithfulness: {result.passing}")
```

Best practices

  1. Use vector indices for most cases - Best performance
  2. Save indices to disk - Avoid re-indexing
  3. Chunk documents properly - 512-1024 tokens optimal
  4. Add metadata - Enables filtering and tracking
  5. Use streaming - Better UX for long responses
  6. Enable verbose during dev - See retrieval process
  7. Evaluate responses - Check relevance and faithfulness
  8. Use chat engine for conversations - Built-in memory
  9. Persist storage - Don't lose your index
  10. Monitor costs - Track embedding and LLM usage
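Best practice 3 above (512-1024-token chunks) can be illustrated with a toy whitespace-token chunker with overlap. Real splitting in LlamaIndex is done by node parsers (e.g. `SentenceSplitter`) using model tokenizers; the `chunk` helper below is only a sketch of the windowing idea, with illustrative sizes.

```python
# Toy overlapping chunker: split on whitespace "tokens" and emit windows.
# Real chunking uses model tokenizers; window sizes here are illustrative.

def chunk(text: str, chunk_size: int = 512, overlap: int = 64) -> list[str]:
    tokens = text.split()
    step = chunk_size - overlap  # advance so adjacent chunks share `overlap` tokens
    chunks = []
    for start in range(0, len(tokens), step):
        window = tokens[start:start + chunk_size]
        chunks.append(" ".join(window))
        if start + chunk_size >= len(tokens):
            break
    return chunks

doc = " ".join(f"tok{i}" for i in range(1200))
pieces = chunk(doc, chunk_size=512, overlap=64)
print(len(pieces))  # 3 chunks; each adjacent pair shares 64 tokens
```

The overlap keeps sentences that straddle a chunk boundary retrievable from either side, which is why node parsers expose both a chunk size and a chunk overlap.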

Common patterns

Document Q&A system

```python
# Complete RAG pipeline
documents = SimpleDirectoryReader("docs").load_data()
index = VectorStoreIndex.from_documents(documents)
index.storage_context.persist(persist_dir="./storage")

# Query
query_engine = index.as_query_engine(
    similarity_top_k=3,
    response_mode="compact",
    verbose=True,
)
response = query_engine.query("What is the main topic?")
print(response)
print(f"Sources: {[node.metadata['file_name'] for node in response.source_nodes]}")
```

Chatbot with memory

```python
# Conversational interface
chat_engine = index.as_chat_engine(
    chat_mode="condense_plus_context",
    verbose=True,
)

# Multi-turn chat
while True:
    user_input = input("You: ")
    if user_input.lower() == "quit":
        break
    response = chat_engine.chat(user_input)
    print(f"Bot: {response}")
```

Performance benchmarks

| Operation | Latency | Notes |
|-----------|---------|-------|
| Index 100 docs | ~10-30s | One-time, can persist |
| Query (vector) | ~0.5-2s | Retrieval + LLM |
| Streaming query | ~0.5s to first token | Better UX |
| Agent with tools | ~3-8s | Multiple tool calls |

LlamaIndex vs LangChain

| Feature | LlamaIndex | LangChain |
|---------|------------|-----------|
| Best for | RAG, document Q&A | Agents, general LLM apps |
| Data connectors | 300+ (LlamaHub) | 100+ |
| RAG focus | Core feature | One of many |
| Learning curve | Easier for RAG | Steeper |
| Customization | High | Very high |
| Documentation | Excellent | Good |
Use LlamaIndex when:
  • Your primary use case is RAG
  • Need many data connectors
  • Want simpler API for document Q&A
  • Building knowledge retrieval system
Use LangChain when:
  • Building complex agents
  • Need more general-purpose tools
  • Want more flexibility
  • Complex multi-step workflows

References

  • Query Engines Guide - Query modes, customization, streaming
  • Agents Guide - Tool creation, RAG agents, multi-step reasoning
  • Data Connectors Guide - 300+ connectors, custom loaders

Resources
