ai-model-wechat


When to use this skill


Use this skill for calling AI models in a WeChat Mini Program using `wx.cloud.extend.AI`. Use it when you need to:
  • Integrate AI text generation in a Mini Program
  • Stream AI responses with callback support
  • Call Hunyuan models from the WeChat environment
Do NOT use it for:
  • Browser/Web apps → use the `ai-model-web` skill
  • Node.js backends or cloud functions → use the `ai-model-nodejs` skill
  • Image generation → use the `ai-model-nodejs` skill (not available in Mini Programs)
  • HTTP API integration → use the `http-api` skill


Available Providers and Models


CloudBase provides these built-in providers and models:

| Provider | Models | Recommended |
| --- | --- | --- |
| hunyuan-exp | hunyuan-turbos-latest, hunyuan-t1-latest, hunyuan-2.0-thinking-20251109, hunyuan-2.0-instruct-20251111 | hunyuan-2.0-instruct-20251111 |
| deepseek | deepseek-r1-0528, deepseek-v3-0324, deepseek-v3.2 | deepseek-v3.2 |


Prerequisites


  • WeChat base library 3.7.1+
  • No extra SDK installation needed
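The base-library requirement can be verified at runtime before any AI call. The sketch below uses a plain dotted-version comparison helper (`compareVersion` is illustrative, not part of any SDK); in a real page the installed version would come from `wx.getSystemInfoSync().SDKVersion`:

```javascript
// Illustrative helper: compare two dotted version strings.
// Returns 1 if v1 > v2, -1 if v1 < v2, 0 if equal.
function compareVersion(v1, v2) {
  const a = v1.split(".").map(Number);
  const b = v2.split(".").map(Number);
  for (let i = 0; i < Math.max(a.length, b.length); i++) {
    const x = a[i] || 0;
    const y = b[i] || 0;
    if (x !== y) return x > y ? 1 : -1;
  }
  return 0;
}

// In a Mini Program you would read the real version, e.g.:
//   const ok = compareVersion(wx.getSystemInfoSync().SDKVersion, "3.7.1") >= 0;
console.log(compareVersion("3.8.0", "3.7.1") >= 0); // true: AI APIs available
```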


Initialization


```js
// app.js
App({
  onLaunch: function () {
    wx.cloud.init({ env: "<YOUR_ENV_ID>" });
  }
});
```


generateText() - Non-streaming


⚠️ Different from the JS/Node SDK: the return value is the raw model response.

```js
const model = wx.cloud.extend.AI.createModel("hunyuan-exp");

const res = await model.generateText({
  model: "hunyuan-2.0-instruct-20251111",  // Recommended model
  messages: [{ role: "user", content: "你好" }],
});

// ⚠️ Return value is the RAW model response, NOT wrapped like the JS/Node SDK
console.log(res.choices[0].message.content);  // Access via the choices array
console.log(res.usage);                       // Token usage
```
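Because the return value is the raw response rather than the wrapped `{ text, usage }` shape of the JS/Node SDK, a small normalizing helper can keep page code uniform. The sketch below is exercised against a mock response object (not a real API call); the `extractReply` name is illustrative:

```javascript
// Sketch: normalize the raw (OpenAI-compatible) response into { content, totalTokens }.
function extractReply(res) {
  const choice = res.choices && res.choices[0];
  return {
    content: choice ? choice.message.content : "",
    totalTokens: res.usage ? res.usage.total_tokens : 0,
  };
}

// Mock object following the raw response shape, for illustration only:
const mockRes = {
  choices: [
    { index: 0, message: { role: "assistant", content: "Hello!" }, finish_reason: "stop" },
  ],
  usage: { prompt_tokens: 3, completion_tokens: 5, total_tokens: 8 },
};
console.log(extractReply(mockRes)); // { content: "Hello!", totalTokens: 8 }
```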


streamText() - Streaming


⚠️ Different from the JS/Node SDK: parameters must be wrapped in a `data` object, and callbacks are supported.

```js
const model = wx.cloud.extend.AI.createModel("hunyuan-exp");

// ⚠️ Parameters MUST be wrapped in a `data` object
const res = await model.streamText({
  data: {                              // ⚠️ Required wrapper
    model: "hunyuan-2.0-instruct-20251111",  // Recommended model
    messages: [{ role: "user", content: "hi" }]
  },
  onText: (text) => {                  // Optional: incremental text callback
    console.log("New text:", text);
  },
  onEvent: ({ data }) => {             // Optional: raw event callback
    console.log("Event:", data);
  },
  onFinish: (fullText) => {            // Optional: completion callback
    console.log("Done:", fullText);
  }
});

// Async iteration is also available
for await (const str of res.textStream) {
  console.log(str);
}

// Check for completion with eventStream
for await (const event of res.eventStream) {
  console.log(event);
  if (event.data === "[DONE]") {       // ⚠️ Check for [DONE] to stop
    break;
  }
}
```
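The async-iteration path can be isolated from the WeChat APIs for testing: any `AsyncIterable<string>` behaves like `res.textStream`. A minimal sketch, using a fake generator (`fakeTextStream` is a stand-in, not real model output):

```javascript
// Sketch: accumulate the full reply from a textStream-like async iterable.
async function collectText(textStream) {
  let full = "";
  for await (const chunk of textStream) {
    full += chunk; // same incremental chunks the onText callback receives
  }
  return full;
}

// Fake stream standing in for res.textStream from streamText():
async function* fakeTextStream() {
  yield "Hello";
  yield ", ";
  yield "world";
}

collectText(fakeTextStream()).then((t) => console.log(t)); // "Hello, world"
```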


API Comparison: JS/Node SDK vs WeChat Mini Program


| Feature | JS/Node SDK | WeChat Mini Program |
| --- | --- | --- |
| Namespace | `app.ai()` | `wx.cloud.extend.AI` |
| generateText params | Direct object | Direct object |
| generateText return | `{ text, usage, messages }` | Raw: `{ choices, usage }` |
| streamText params | Direct object | ⚠️ Wrapped in `data: {...}` |
| streamText return | `{ textStream, dataStream }` | `{ textStream, eventStream }` |
| Callbacks | Not supported | `onText`, `onEvent`, `onFinish` |
| Image generation | Node SDK only | Not available |


Type Definitions


streamText() Input


```ts
interface WxStreamTextInput {
  data: {                              // ⚠️ Required wrapper object
    model: string;
    messages: Array<{
      role: "user" | "system" | "assistant";
      content: string;
    }>;
  };
  onText?: (text: string) => void;     // Incremental text callback
  onEvent?: (prop: { data: string }) => void;  // Raw event callback
  onFinish?: (text: string) => void;   // Completion callback
}
```

streamText() Return


```ts
interface WxStreamTextResult {
  textStream: AsyncIterable<string>;   // Incremental text stream
  eventStream: AsyncIterable<{         // Raw event stream
    event?: unknown;
    id?: unknown;
    data: string;                      // "[DONE]" when complete
  }>;
}
```

generateText() Return


```ts
// Raw model response (OpenAI-compatible format)
interface WxGenerateTextResponse {
  id: string;
  object: "chat.completion";
  created: number;
  model: string;
  choices: Array<{
    index: number;
    message: {
      role: "assistant";
      content: string;
    };
    finish_reason: string;
  }>;
  usage: {
    prompt_tokens: number;
    completion_tokens: number;
    total_tokens: number;
  };
}
```


Best Practices


  1. Check base library version - Ensure 3.7.1+ for AI support
  2. Use callbacks for UI updates - `onText` is great for real-time display
  3. Check for [DONE] - When using `eventStream`, check `event.data === "[DONE]"` to stop
  4. Handle errors gracefully - Wrap AI calls in try/catch
  5. Remember the `data` wrapper - streamText params must be wrapped in `data: {...}`
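The error-handling practice can be folded into a small wrapper. The sketch below accepts any object exposing a `generateText` method, so it can be exercised with a stub in place of a real `wx.cloud.extend.AI` model (the `safeGenerate` name and the stub are illustrative):

```javascript
// Sketch: wrap a generateText call in try/catch and fall back to null on failure.
async function safeGenerate(model, params) {
  try {
    const res = await model.generateText(params);
    return res.choices?.[0]?.message?.content ?? null;
  } catch (err) {
    console.error("AI call failed:", err);
    return null; // fall back instead of crashing the page
  }
}

// Stub model that always fails, to show the fallback path:
const failingModel = {
  generateText: async () => { throw new Error("network down"); },
};
safeGenerate(failingModel, {}).then((r) => console.log(r)); // null
```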