xcrawl-search


XCrawl Search


Overview


This skill uses XCrawl Search API to retrieve query-based results. Default behavior is raw passthrough: return upstream API response bodies as-is.

Required Local Config


Before using this skill, the user must create a local config file containing `XCRAWL_API_KEY`.

Path: `~/.xcrawl/config.json`

```json
{
  "XCRAWL_API_KEY": "<your_api_key>"
}
```

Read the API key from the local config file only. Do not require global environment variables.
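The setup step above can be sketched as a short shell snippet; `<your_api_key>` is a placeholder to replace with a real key:

```shell
# Create the local config directory and file expected by this skill.
mkdir -p ~/.xcrawl
cat > ~/.xcrawl/config.json <<'EOF'
{
  "XCRAWL_API_KEY": "<your_api_key>"
}
EOF
```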

Credits and Account Setup


Using XCrawl APIs consumes credits. If the user does not have an account or available credits, guide them to register at https://dash.xcrawl.com/. After registration, they can activate the free 1000-credit plan before running requests.

Tool Permission Policy


Request runtime permissions for `curl` and `node` only. Do not request Python, shell helper scripts, or other runtime permissions.

API Surface


  • Search endpoint: `POST /v1/search`
  • Base URL: `https://run.xcrawl.com`
  • Required header: `Authorization: Bearer <XCRAWL_API_KEY>`

Usage Examples


cURL


```bash
API_KEY="$(node -e "const fs=require('fs');const p=process.env.HOME+'/.xcrawl/config.json';const k=JSON.parse(fs.readFileSync(p,'utf8')).XCRAWL_API_KEY||'';process.stdout.write(k)")"

curl -sS -X POST "https://run.xcrawl.com/v1/search" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ${API_KEY}" \
  -d '{"query":"AI web crawler API","location":"US","language":"en","limit":20}'
```

Node


```bash
node -e '
const fs=require("fs");
const apiKey=JSON.parse(fs.readFileSync(process.env.HOME+"/.xcrawl/config.json","utf8")).XCRAWL_API_KEY;
const body={query:"web scraping pricing",location:"DE",language:"de",limit:30};
fetch("https://run.xcrawl.com/v1/search",{
  method:"POST",
  headers:{"Content-Type":"application/json",Authorization:`Bearer ${apiKey}`},
  body:JSON.stringify(body)
}).then(async r=>{console.log(await r.text());});
'
```

Request Parameters


Request endpoint and headers


  • Endpoint: `POST https://run.xcrawl.com/v1/search`
  • Headers:
    • `Content-Type: application/json`
    • `Authorization: Bearer <api_key>`

Request body: top-level fields


| Field | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| `query` | string | Yes | - | Search query |
| `location` | string | No | `US` | Location (country/city/region name or ISO code; best effort) |
| `language` | string | No | `en` | Language (ISO 639-1) |
| `limit` | integer | No | `10` | Max results (1-100) |
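A fully explicit request body under the defaults in the table above (the query string is illustrative):

```json
{
  "query": "AI web crawler API",
  "location": "US",
  "language": "en",
  "limit": 10
}
```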

Response Parameters


| Field | Type | Description |
| --- | --- | --- |
| `search_id` | string | Task ID |
| `endpoint` | string | Always `search` |
| `version` | string | Version |
| `status` | string | `completed` |
| `query` | string | Search query |
| `data` | object | Search result data |
| `started_at` | string | Start time (ISO 8601) |
| `ended_at` | string | End time (ISO 8601) |
| `total_credits_used` | integer | Total credits used |

Notes on `data` from the current API reference:
  • The concrete result schema is implementation-defined.
  • It includes billing fields such as `credits_used` and `credits_detail`.
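A minimal sketch of reading the response envelope described above. The example object is illustrative: only the top-level field names and the billing field names (`credits_used`, `credits_detail`) come from the reference; the rest of `data` is an assumption, since its schema is implementation-defined.

```javascript
// Illustrative response envelope using the documented top-level fields.
const res = {
  search_id: "s_123",
  endpoint: "search",
  version: "1",
  status: "completed",
  query: "AI web crawler API",
  data: { credits_used: 3, credits_detail: {} }, // shape is an assumption
  started_at: "2024-01-01T00:00:00Z",
  ended_at: "2024-01-01T00:00:02Z",
  total_credits_used: 3,
};

// Raw passthrough is the default; this check only gates a log line.
if (res.status === "completed") {
  console.log(`search ${res.search_id} used ${res.total_credits_used} credits`);
}
```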

Workflow


  1. Rewrite the request as a clear search objective.
     • Include entity, geography, language, and freshness intent.
  2. Build and execute `POST /v1/search`.
     • Keep the request explicit and deterministic.
  3. Return the raw API response directly.
     • Do not synthesize relevance summaries unless requested.

Output Contract


Return:
  • Endpoint used (`POST /v1/search`)
  • `request_payload` used for the request
  • Raw response body from the search call
  • Error details when the request fails
Do not generate summaries unless the user explicitly requests one.
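The contract above can be sketched as one helper that always surfaces the endpoint, the exact `request_payload`, and either the raw body or error details. The function name and result shape are illustrative, not part of the API:

```javascript
// Sketch: always return endpoint, request_payload, and the raw outcome.
async function runSearch(apiKey, request_payload) {
  const endpoint = "POST /v1/search";
  try {
    const r = await fetch("https://run.xcrawl.com/v1/search", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${apiKey}`,
      },
      body: JSON.stringify(request_payload),
    });
    // Raw passthrough: return the body as text, with no parsing or summary.
    return { endpoint, request_payload, raw_response: await r.text() };
  } catch (err) {
    return { endpoint, request_payload, error: String(err) };
  }
}
```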

Guardrails


  • Do not claim ranking guarantees that the API does not expose.
  • Do not fabricate unavailable filters or response fields.
  • Do not hardcode provider-specific tool schemas in core logic.