Fal AI Model Search & Integration
Guide for searching and integrating Fal AI models from the fal.ai platform.
Workflow
1. Search Models
When the user wants to find a model, use WebFetch to search:
https://fal.ai/explore/search?q=<search-query>

Process:
- Parse the search results, looking for model IDs in the format <namespace>/<model-name> (e.g., fal-ai/flux-schnell, xai/grok-imagine-image)
- Extract: model ID, category (text-to-image, image-to-image, etc.), description
- Present the found models to the user with:
  - Model name and category
  - Description
  - Playground link (xai models: https://fal.ai/models/xai/<model>; others: https://fal.ai/models/fal-ai/<model>)
- Wait for user confirmation on which model to use

Important: The full model ID format depends on the model namespace:

| Search Result | Full Model ID | API Endpoint |
|---|---|---|
| xai/<model> | xai/<model> | https://fal.run/xai/<model> |
| fal-ai/<model> | fal-ai/<model> | https://fal.run/fal-ai/<model> |
| <other-namespace>/<model> | fal-ai/<other-namespace>/<model> | https://fal.run/fal-ai/<other-namespace>/<model> |

Rule: xai/ models use their ID directly; all others need the fal-ai/ prefix.
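The search step above can be sketched as a small helper that builds the search URL. WebFetch is the tool named in this guide, so the standard-library `urllib` below stands in purely for illustration; the function name is ours, not part of any Fal API:

```python
from urllib.parse import quote

SEARCH_BASE = "https://fal.ai/explore/search"

def build_search_url(query: str) -> str:
    """Build the fal.ai explore search URL for a free-text query."""
    # URL-encode the query so spaces and special characters survive.
    return f"{SEARCH_BASE}?q={quote(query)}"
```

For example, `build_search_url("text to image")` yields a URL with the query percent-encoded, ready to be fetched and parsed for model IDs.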
2. Get Model Details
Once the user confirms a model, fetch its detailed information.

URL Pattern Rules:

| Model Prefix | llms.txt URL | Playground URL | Full Model ID |
|---|---|---|---|
| xai/ | https://fal.ai/models/<model-id>/llms.txt | https://fal.ai/models/<model-id> | <model-id> |
| Others | https://fal.ai/models/fal-ai/<model-id>/llms.txt | https://fal.ai/models/fal-ai/<model-id> | fal-ai/<model-id> |

Rule: If the model ID starts with xai/, do NOT prepend fal-ai/. For all other models, prepend fal-ai/.

Examples:
- xai/grok-imagine-image → llms.txt: .../models/xai/grok-imagine-image/llms.txt, Playground: .../models/xai/grok-imagine-image
- bytedance/seedream/v4.5/text-to-image → llms.txt: .../models/fal-ai/bytedance/seedream/v4.5/text-to-image/llms.txt, Playground: .../models/fal-ai/bytedance/seedream/v4.5/text-to-image

When presenting model details to the user, always include:
- Model ID
- Playground link: for interactive testing
- API endpoint: https://fal.run/<full-model-id>
- Documentation links: llms.txt, API docs (https://fal.ai/models/<full-model-id>/api)

Process:
- Parse the llms.txt content for:
  - Model capabilities and use cases
  - API endpoint and parameters
  - Code examples
  - Pricing information
- Present the key details to the user
- Confirm the integration approach with the user
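The prefix rule and URL patterns above can be collapsed into one helper. This is a sketch derived only from the table and examples in this section; the function and dictionary keys are illustrative:

```python
def resolve_model(model_id: str) -> dict:
    """Apply the namespace rule: xai/ IDs (and IDs already carrying the
    fal-ai/ prefix) are used as-is; every other ID gets the fal-ai/ prefix."""
    if model_id.startswith(("xai/", "fal-ai/")):
        full_id = model_id
    else:
        full_id = f"fal-ai/{model_id}"
    base = f"https://fal.ai/models/{full_id}"
    return {
        "full_id": full_id,
        "llms_txt": f"{base}/llms.txt",       # machine-readable docs
        "playground": base,                    # interactive testing
        "api_docs": f"{base}/api",             # human-readable API docs
        "api_endpoint": f"https://fal.run/{full_id}",
    }
```

Running it on the two examples above reproduces the documented URLs: `xai/grok-imagine-image` stays unprefixed, while `bytedance/seedream/v4.5/text-to-image` becomes `fal-ai/bytedance/seedream/v4.5/text-to-image`.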
3. Integration Guidelines
When integrating a Fal AI model:
- Check the existing Fal client setup in the project
- Use the full model ID returned from search (format: fal-ai/<model-name> or xai/<model-name>)
- Follow the API patterns shown in the llms.txt examples
- Add proper error handling for API calls
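As a minimal sketch of the request shape, the helper below targets the synchronous `https://fal.run/<full-model-id>` endpoint named earlier. The `Authorization: Key <api-key>` header and the `FAL_KEY` environment variable are assumptions here, not confirmed by this guide; check the project's existing Fal client setup for the authoritative scheme:

```python
import json
import os
import urllib.request

def build_fal_request(full_model_id: str, payload: dict) -> urllib.request.Request:
    """Build a synchronous inference request against https://fal.run/<id>.

    ASSUMPTION: the "Key <api-key>" Authorization scheme and the FAL_KEY
    environment variable name are illustrative -- verify them against your
    project's Fal client configuration before use.
    """
    api_key = os.environ.get("FAL_KEY", "")
    return urllib.request.Request(
        f"https://fal.run/{full_model_id}",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Key {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

The returned request can then be sent with `urllib.request.urlopen` (wrapped in error handling for the status codes listed in the Error Handling section below).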
4. User Confirmation Points
Always confirm with the user at these stages:
- After search results: "Found these models: [list with links]. Which one would you like to use?"
- After model details: "Here are the details for [model] ([playground link]). Shall I proceed with integration?"
- Before code changes: "I'll integrate [model] by [changes]. Does this look correct?"

Link format for presentation:
- Use markdown format: [description](url)
- Always include the model's playground link so users can test it interactively
Common Model Patterns
| Model Type | Example IDs | Use Case |
|---|---|---|
| Text to Image | fal-ai/flux-schnell, fal-ai/flux-pro/v1.1 | Generate images from text |
| Image to Image | | Transform existing images |
| Video | | Generate video content |
| Audio | | Text-to-speech, audio generation |
Example Output Format
When presenting search results to the user, use this format:

Found 3 models matching "flux":

1. **fal-ai/flux-schnell** (Text to Image)
   - Fastest Flux model for high-quality image generation
   - [Playground](https://fal.ai/models/fal-ai/flux-schnell) | [API Docs](https://fal.ai/models/fal-ai/flux-schnell/api)
2. **fal-ai/flux-pro/v1.1** (Text to Image)
   - Professional-grade image generation with superior quality
   - [Playground](https://fal.ai/models/fal-ai/flux-pro/v1.1) | [API Docs](https://fal.ai/models/fal-ai/flux-pro/v1.1/api)

Which model would you like to use?
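A helper that renders search hits in that format might look like the sketch below; the input field names (`id`, `category`, `description`) are illustrative, matching whatever your search-result parsing produces:

```python
def format_results(query: str, models: list[dict]) -> str:
    """Render search hits in the presentation format shown above."""
    lines = [f'Found {len(models)} models matching "{query}":', ""]
    for i, m in enumerate(models, start=1):
        # Playground and API-docs links are derived from the full model ID.
        base = f"https://fal.ai/models/{m['id']}"
        lines.append(f"{i}. **{m['id']}** ({m['category']})")
        lines.append(f"   - {m['description']}")
        lines.append(f"   - [Playground]({base}) | [API Docs]({base}/api)")
    lines += ["", "Which model would you like to use?"]
    return "\n".join(lines)
```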
Error Handling
| Error | Cause | Solution |
|---|---|---|
| No search results | Query too specific or no matching models | Suggest broader search terms or alternative keywords |
| Model not found (404) | Wrong URL pattern | Check whether the model ID starts with xai/ and apply the prefix rule accordingly |
| llms.txt returns empty | Model deprecated or unavailable | Inform the user the model may be deprecated; suggest alternatives |
| API key error (401) | Missing or invalid API key | Check the environment variable or prompt the user to set it |
| Rate limit (429) | Too many requests | Implement exponential backoff; suggest retrying after a delay |
| Timeout | Model inference taking too long | For long-running models, use the asynchronous queue API instead of a synchronous call |
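The 429 row suggests exponential backoff. A minimal sketch, where `RateLimited` is a stand-in for however your HTTP client surfaces a 429 and the delay values are illustrative:

```python
import time

class RateLimited(Exception):
    """Stand-in for an HTTP 429 response from the API client."""

def call_with_backoff(fn, max_retries: int = 3, base_delay: float = 1.0):
    """Retry fn() on rate limiting, doubling the delay after each attempt."""
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except RateLimited:
            if attempt == max_retries:
                raise  # give up after the final retry
            # Exponential backoff: base_delay * 2^attempt seconds.
            time.sleep(base_delay * (2 ** attempt))
```

A production version would typically also honor a `Retry-After` header if the API provides one, and add jitter to the delay.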
Notes
- The llms.txt endpoint returns machine-readable documentation optimized for LLMs
- Model IDs can have various prefixes (fal-ai/, xai/, etc.) - always use the full ID from search results
- Always present options to the user before making integration decisions