xcrawl-search
XCrawl Search
Overview
This skill uses the XCrawl Search API to retrieve query-based results.
The default behavior is raw passthrough: upstream API response bodies are returned as-is.
Required Local Config
Before using this skill, the user must create a local config file and write `XCRAWL_API_KEY` into it.

Path: `~/.xcrawl/config.json`

```json
{
  "XCRAWL_API_KEY": "<your_api_key>"
}
```

Read the API key from the local config file only. Do not require global environment variables.
Credits and Account Setup
Using XCrawl APIs consumes credits.
If the user does not have an account or available credits, guide them to register at https://dash.xcrawl.com/.
After registration, they can activate the free 1000-credit plan before running requests.

Tool Permission Policy
Request runtime permissions for `curl` and `node` only.
Do not request Python, shell helper scripts, or other runtime permissions.

API Surface
- Search endpoint: `POST /v1/search`
- Base URL: `https://run.xcrawl.com`
- Required header: `Authorization: Bearer <XCRAWL_API_KEY>`
Usage Examples
cURL

```bash
API_KEY="$(node -e "const fs=require('fs');const p=process.env.HOME+'/.xcrawl/config.json';const k=JSON.parse(fs.readFileSync(p,'utf8')).XCRAWL_API_KEY||'';process.stdout.write(k)")"
curl -sS -X POST "https://run.xcrawl.com/v1/search" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ${API_KEY}" \
  -d '{"query":"AI web crawler API","location":"US","language":"en","limit":20}'
```

Node
```bash
node -e '
const fs=require("fs");
const apiKey=JSON.parse(fs.readFileSync(process.env.HOME+"/.xcrawl/config.json","utf8")).XCRAWL_API_KEY;
const body={query:"web scraping pricing",location:"DE",language:"de",limit:30};
fetch("https://run.xcrawl.com/v1/search",{
  method:"POST",
  headers:{"Content-Type":"application/json",Authorization:`Bearer ${apiKey}`},
  body:JSON.stringify(body)
}).then(async r=>{console.log(await r.text());});
'
```

Request Parameters
Request endpoint and headers

- Endpoint: `POST https://run.xcrawl.com/v1/search`
- Headers: `Content-Type: application/json`, `Authorization: Bearer <api_key>`
Request body: top-level fields

| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| `query` | string | Yes | - | Search query |
| `location` | string | No | | Location (country/city/region name or ISO code; best effort) |
| `language` | string | No | | Language (ISO 639-1) |
| `limit` | integer | No | | Max results |
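One way to keep requests explicit is to assemble the body from these fields only when they are provided, leaving server-side defaults to the API. The `buildSearchBody` helper below is a hypothetical sketch, not part of the skill.

```javascript
// Sketch: build the request body from the fields in the table above.
// Optional fields are included only when supplied; the helper name is
// an assumption for illustration.
function buildSearchBody({ query, location, language, limit }) {
  if (!query) {
    throw new Error("query is required");
  }
  const body = { query };
  if (location) body.location = location;
  if (language) body.language = language;
  if (Number.isInteger(limit)) body.limit = limit;
  return body;
}
```

Omitting unset fields keeps the payload deterministic and avoids implying client-side defaults the API does not document.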
Response Parameters

| Field | Type | Description |
|---|---|---|
| | string | Task ID |
| | string | Fixed value |
| | string | Version |
| | string | |
| | string | Search query |
| `data` | object | Search result data |
| | string | Start time (ISO 8601) |
| | string | End time (ISO 8601) |
| `credits_used` | integer | Total credits used |

Notes on `data` from the current API reference:

- The concrete result schema is implementation-defined
- Billing fields such as `credits_used` and `credits_detail` are included
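Because the result schema inside `data` is implementation-defined, callers should read billing fields defensively rather than assume a shape. The helper below is an illustrative sketch, not part of the API.

```javascript
// Sketch: extract credits_used without assuming the full response schema.
// Returns null when the field is absent or not a number.
function creditsUsed(response) {
  if (response && typeof response.credits_used === "number") {
    return response.credits_used;
  }
  return null;
}
```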
Workflow

- Rewrite the request as a clear search objective.
- Include entity, geography, language, and freshness intent.
- Build and execute `POST /v1/search`.
- Keep the request explicit and deterministic.
- Return the raw API response directly.
- Do not synthesize relevance summaries unless requested.
Output Contract

Return:

- Endpoint used (`POST /v1/search`)
- `request_payload` used for the request
- Raw response body from the search call
- Error details when the request fails

Do not generate summaries unless the user explicitly requests a summary.
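The contract above can be sketched as a plain object builder. `formatOutput` and every field name except `request_payload` are assumptions for illustration.

```javascript
// Sketch: package the output contract fields. On failure only error details
// are attached; no summary is ever synthesized here.
function formatOutput({ requestPayload, rawBody, error }) {
  const out = { endpoint: "POST /v1/search", request_payload: requestPayload };
  if (error) {
    out.error = String(error);
  } else {
    out.raw_response = rawBody;
  }
  return out;
}
```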
Guardrails

- Do not claim ranking guarantees that the API does not expose.
- Do not fabricate unavailable filters or response fields.
- Do not hardcode provider-specific tool schemas in core logic.