List available large language models and send chat completion requests programmatically. Use this skill when you need to call an LLM within a snippet, including model comparison, visual understanding, batch inference, and model performance testing.
```shell
npx skill4agent add dtyq/magic using-llm
```

The SDK lives in `sdk.llm` and exposes `create_openai_sync_client`. There are two ways to run code that uses it: pass the code string to `run_python_snippet`, or write a `.py` file and execute it with `shell_exec`.

```python
# Option 1: run_python_snippet
run_python_snippet(
    python_code="""
from sdk.llm import create_openai_sync_client
client = create_openai_sync_client()
...
""",
    script_path="temp_llm_xxx.py",
    timeout=300,
)
```
```python
# Option 2: write a .py file, then run with shell_exec
# First write the script with write_file, then execute:
shell_exec("python scripts/my_llm_script.py")
```

The default `timeout=120` can be too short for LLM calls; pass a larger value such as `timeout=300` for long-running requests.

List the available models first, so chat requests use a real model ID:

```python
run_python_snippet(
    python_code="""
import json
from sdk.llm import create_openai_sync_client
client = create_openai_sync_client()
models = client.models.list()
print(json.dumps([{"id": m.id} for m in models.data], ensure_ascii=False, indent=2))
""",
    script_path="temp_list_models.py",
)
```

Example output:

```json
[
  {"id": "claude-3-5-sonnet-20241022"},
  {"id": "gpt-4o"},
  {"id": "deepseek-v3"}
]
```

Send a chat completion request:

```python
run_python_snippet(
    python_code="""
from sdk.llm import create_openai_sync_client
client = create_openai_sync_client()
response = client.chat.completions.create(
    model="<model ID>",
    messages=[
        {"role": "system", "content": "You are an assistant"},
        {"role": "user", "content": "Hello"},
    ],
    extra_body={"thinking": {"type": "disabled"}},
)
print(response.choices[0].message.content)
""",
    script_path="temp_chat.py",
    timeout=120,
)
```

For visual understanding, two helpers turn a local image into something the model can consume:

| Function | Use Case |
|---|---|
| `file_to_url` | Use this first — returns a directly accessible URL |
| `image_to_base64` | Fallback if `file_to_url` fails |

IMPORTANT — `image_to_base64` return value: the function already returns a complete data URL string like `data:image/jpeg;base64,/9j/4AAQ...`. Use the return value directly as the image `url`. Do NOT prepend `data:image/jpeg;base64,` again — doing so will cause an `Invalid base64 image_url` error.
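The warning above can be made concrete with a small sanity check. A minimal sketch using only the standard library — `ensure_data_url` is a hypothetical helper, not part of `sdk.llm`:

```python
def ensure_data_url(s: str) -> str:
    """Return a well-formed data URL, guarding against a doubled prefix.

    image_to_base64 already returns a full "data:<mime>;base64,<payload>"
    string; prepending the prefix again is exactly the mistake that
    triggers an "Invalid base64 image_url" error.
    """
    if not s.startswith("data:"):
        raise ValueError("not a data URL")
    _head, _, payload = s.partition(",")
    # A doubled prefix leaves another "data:..." where the payload should be.
    if payload.startswith("data:"):
        raise ValueError("Invalid base64 image_url: data URL prefix prepended twice")
    return s

# Correct: pass the returned data URL through unchanged.
image_url = ensure_data_url("data:image/jpeg;base64,/9j/4AAQ")
```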
```python
run_python_snippet(
    python_code="""
from sdk.llm import create_openai_sync_client, file_to_url, image_to_base64
client = create_openai_sync_client()
# Use file_to_url first; paths are relative to the .workspace/ directory
image_url = file_to_url("test/screenshot.png")
# If file_to_url fails, fall back to image_to_base64:
# image_url = image_to_base64("test/screenshot.png")
# image_to_base64 returns a complete data URL — use it directly,
# never prepend "data:...;base64," again
response = client.chat.completions.create(
    model="<vision model ID>",
    messages=[{
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": image_url}},
            {"type": "text", "text": "Describe the content of this image"},
        ],
    }],
    extra_body={"thinking": {"type": "disabled"}},
)
print(response.choices[0].message.content)
""",
    script_path="temp_vision.py",
    timeout=120,
)
```

Parameters of `client.chat.completions.create()`:

| Parameter | Type | Required | Description |
|---|---|---|---|
| `model` | `str` | Yes | Model ID — use a real ID from the model list above |
| `messages` | `list` | Yes | List of messages, each with `role` and `content` |
| `temperature` | `float` | No | Sampling temperature, 0~2, default 1 |
| `max_tokens` | `int` | No | Maximum output tokens |
| `tools` | `list` | No | Tool definitions (Function Calling) |
| `extra_body` | `dict` | No | Extra fields not natively supported by the OpenAI SDK, e.g. `thinking` |
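The `tools` parameter is not demonstrated above. Below is a minimal sketch of a Function Calling round-trip with the API call itself elided: the tool definition follows the OpenAI function-calling schema, while `dispatch_tool_call` is a hypothetical helper that routes a tool call (shown here as a plain dict rather than the SDK's message object) to a local stub function:

```python
import json

# A single tool definition in the OpenAI function-calling schema,
# suitable for passing as tools=TOOLS to chat.completions.create().
TOOLS = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

def dispatch_tool_call(call: dict) -> str:
    """Route one tool call to a local function and return its result.

    The model returns arguments as a JSON string, so they must be parsed
    before being passed to the local implementation.
    """
    name = call["function"]["name"]
    args = json.loads(call["function"]["arguments"])
    if name == "get_weather":
        return f"weather in {args['city']}: sunny"  # stub result for illustration
    raise ValueError(f"unknown tool: {name}")
```

In a real loop, each result would be sent back as a `{"role": "tool", ...}` message so the model can produce its final answer.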
The `thinking` field of `extra_body` controls deep thinking. Supported values for `thinking.type`:

| `thinking.type` | Description |
|---|---|
| `disabled` | Force disable deep thinking — model will not output chain-of-thought (recommended default) |
| `enabled` | Force enable deep thinking — model always outputs chain-of-thought |
| `auto` | Model decides on its own whether to use deep thinking |

Note: the `thinking` parameter only applies to models that support deep thinking (e.g. the doubao-seed series). Passing it to unsupported models may cause errors — check whether the target model supports this parameter before using it.
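Since an unsupported `thinking` payload can raise an error, it may help to centralize construction of the field. A hedged sketch — `thinking_body` and the `THINKING_MODELS` allow-list are hypothetical, not part of `sdk.llm`:

```python
# Model-ID prefixes assumed (for illustration only) to accept `thinking`.
THINKING_MODELS = ("doubao-seed",)

def thinking_body(model: str, mode: str = "disabled") -> dict:
    """Build extra_body for chat.completions.create, validating the mode.

    Returns {} for models not known to support deep thinking, so the
    non-standard field is simply omitted instead of being rejected upstream.
    """
    if mode not in ("disabled", "enabled", "auto"):
        raise ValueError(f"unknown thinking mode: {mode!r}")
    if not any(model.startswith(p) for p in THINKING_MODELS):
        return {}
    return {"thinking": {"type": mode}}
```

Usage: `client.chat.completions.create(model=m, messages=msgs, extra_body=thinking_body(m))`.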
```python
# Disable thinking (recommended default)
extra_body={"thinking": {"type": "disabled"}}
# Enable thinking
extra_body={"thinking": {"type": "enabled"}}
# Let the model decide
extra_body={"thinking": {"type": "auto"}}
```

`client.chat.completions.create()` returns a `ChatCompletion` object:

```python
response.choices[0].message.content            # text reply
response.choices[0].message.tool_calls         # tool calls (Function Calling)
response.choices[0].finish_reason              # stop / tool_calls / length
response.usage.total_tokens                    # total tokens used

# Only present when thinking.type is "enabled" or "auto" (and the model decides to think)
response.choices[0].message.reasoning_content  # chain-of-thought content
response.usage.completion_tokens_details       # contains reasoning_tokens
```

Note: `reasoning_content` is a non-standard field and is not automatically parsed by the OpenAI SDK as an attribute. Access it as follows:
```python
# Option 1: via model_extra
reasoning = response.choices[0].message.model_extra.get("reasoning_content")

# Option 2: convert to dict
import json
msg_dict = json.loads(response.choices[0].message.model_dump_json())
reasoning = msg_dict.get("reasoning_content")