Gemini 3 Pro Advanced Features
Comprehensive guide for advanced Gemini 3 Pro capabilities including function calling, built-in tools, structured outputs, context caching, batch processing, and framework integrations.
Overview
This skill covers production-ready advanced features that extend Gemini 3 Pro's capabilities beyond basic text generation.
Key Capabilities
- Function Calling: Custom tool integration with OpenAPI 3.0
- Built-in Tools: Google Search, Code Execution, File Search, URL Context
- Structured Outputs: Guaranteed JSON structure with Pydantic/Zod
- Thought Signatures: Managing multi-turn reasoning context
- Context Caching: Reuse large contexts (>2k tokens) for cost savings
- Batch Processing: Async processing at scale
- Framework Integration: LangChain, Vercel AI, Pydantic AI, CrewAI
When to Use This Skill
- Implementing custom tools/functions
- Enabling Google Search grounding
- Executing code safely
- Requiring structured JSON output
- Optimizing costs with caching
- Batch processing requests
- Building production applications
- Integrating with AI frameworks
Quick Start
Function Calling Quick Start
```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

# Define function
def get_weather(location: str) -> dict:
    return {"location": location, "temp": 72, "condition": "sunny"}

# Declare function to model
weather_func = genai.protos.FunctionDeclaration(
    name="get_weather",
    description="Get current weather for a location",
    parameters={
        "type": "object",
        "properties": {
            "location": {"type": "string", "description": "City name"}
        },
        "required": ["location"]
    }
)

model = genai.GenerativeModel(
    "gemini-3-pro-preview",
    tools=[genai.protos.Tool(function_declarations=[weather_func])]
)

# Use function
response = model.generate_content("What's the weather in San Francisco?")

# Handle function call
if response.parts[0].function_call:
    fc = response.parts[0].function_call
    result = get_weather(**dict(fc.args))
    # Send result back
    response = model.generate_content([
        {"role": "model", "parts": [response.parts[0]]},
        {"role": "user", "parts": [genai.protos.Part(
            function_response=genai.protos.FunctionResponse(
                name=fc.name,
                response=result
            )
        )]}
    ])

print(response.text)
```

---
Core Tasks
Task 1: Implement Function Calling
Goal: Create custom tools that the model can call.
Python Example:
```python
import google.generativeai as genai
from datetime import datetime

genai.configure(api_key="YOUR_API_KEY")

# Define Python functions
def get_current_time() -> str:
    return datetime.now().strftime("%Y-%m-%d %H:%M:%S")

def calculate(operation: str, a: float, b: float) -> float:
    ops = {
        "add": lambda x, y: x + y,
        "subtract": lambda x, y: x - y,
        "multiply": lambda x, y: x * y,
        "divide": lambda x, y: x / y if y != 0 else "Error: Division by zero"
    }
    return ops.get(operation, lambda x, y: "Unknown operation")(a, b)

# Declare functions to model (OpenAPI 3.0 format)
time_func = genai.protos.FunctionDeclaration(
    name="get_current_time",
    description="Get the current date and time",
    parameters={"type": "object", "properties": {}}
)

calc_func = genai.protos.FunctionDeclaration(
    name="calculate",
    description="Perform basic arithmetic operations",
    parameters={
        "type": "object",
        "properties": {
            "operation": {
                "type": "string",
                "enum": ["add", "subtract", "multiply", "divide"],
                "description": "The operation to perform"
            },
            "a": {"type": "number", "description": "First number"},
            "b": {"type": "number", "description": "Second number"}
        },
        "required": ["operation", "a", "b"]
    }
)

# Create model with tools
model = genai.GenerativeModel(
    "gemini-3-pro-preview",
    tools=[genai.protos.Tool(function_declarations=[time_func, calc_func])]
)

# Use tools
chat = model.start_chat()
response = chat.send_message("What time is it? Also calculate 15 * 8")

# Process function calls
function_registry = {
    "get_current_time": get_current_time,
    "calculate": calculate
}

while response.parts[0].function_call:
    fc = response.parts[0].function_call
    func = function_registry[fc.name]
    result = func(**dict(fc.args))
    response = chat.send_message(genai.protos.Part(
        function_response=genai.protos.FunctionResponse(
            name=fc.name,
            response={"result": result}
        )
    ))

print(response.text)
```
**See:** `references/function-calling.md` for comprehensive guide
---
Task 2: Use Built-in Tools
Goal: Enable Google Search, Code Execution, and other built-in tools.
Google Search Grounding:
```python
# Enable Google Search
model = genai.GenerativeModel(
    "gemini-3-pro-preview",
    tools=[{"google_search_retrieval": {}}]
)

response = model.generate_content("What are the latest developments in quantum computing?")

# Check grounding metadata
if hasattr(response, 'grounding_metadata'):
    print(f"Search sources used: {len(response.grounding_metadata.grounding_chunks)}")

print(response.text)
```
**Code Execution:**
```python
# Enable code execution
model = genai.GenerativeModel(
    "gemini-3-pro-preview",
    tools=[{"code_execution": {}}]
)

response = model.generate_content(
    "Calculate the first 20 Fibonacci numbers and show the results"
)

print(response.text)
```
**See:** `references/built-in-tools.md` for all tools
---
Task 3: Implement Structured Outputs
Goal: Get a guaranteed JSON structure from the model.
Python with Pydantic:
```python
import json
import google.generativeai as genai
from pydantic import BaseModel
from typing import List

genai.configure(api_key="YOUR_API_KEY")

# Define schema
class Movie(BaseModel):
    title: str
    director: str
    year: int
    genre: List[str]
    rating: float

class MovieList(BaseModel):
    movies: List[Movie]

# Configure model for structured output
model = genai.GenerativeModel(
    "gemini-3-pro-preview",
    generation_config={
        "response_mime_type": "application/json",
        "response_schema": MovieList
    }
)

response = model.generate_content(
    "List 3 classic science fiction movies"
)

# Parse structured output
data = json.loads(response.text)
movies = MovieList(**data)
for movie in movies.movies:
    print(f"{movie.title} ({movie.year}) - Rating: {movie.rating}")
```
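Because the schema constraint does not protect against a transport-level surprise, it is worth parsing defensively. A minimal sketch (trimmed two-field schema; `raw` is a stand-in for `response.text`, not actual API output):

```python
# Hedged sketch: validate model JSON with Pydantic so a malformed
# payload fails loudly instead of propagating bad data downstream.
import json
from typing import List

from pydantic import BaseModel, ValidationError

class Movie(BaseModel):
    title: str
    year: int

class MovieList(BaseModel):
    movies: List[Movie]

raw = '{"movies": [{"title": "Alien", "year": 1979}]}'  # stand-in for response.text
try:
    movies = MovieList(**json.loads(raw))
    print(movies.movies[0].title)  # prints "Alien"
except ValidationError as e:
    print(f"Schema mismatch: {e}")
```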
**TypeScript with Zod:**
```typescript
import { GoogleGenerativeAI } from "@google/generative-ai";
import { z } from "zod";

const MovieSchema = z.object({
  title: z.string(),
  director: z.string(),
  year: z.number(),
  genre: z.array(z.string()),
  rating: z.number()
});

const MovieListSchema = z.object({
  movies: z.array(MovieSchema)
});

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY!);
const model = genAI.getGenerativeModel({
  model: "gemini-3-pro-preview",
  generationConfig: {
    responseMimeType: "application/json",
    responseSchema: MovieListSchema
  }
});

const result = await model.generateContent("List 3 classic science fiction movies");
const movies = MovieListSchema.parse(JSON.parse(result.response.text()));
console.log(movies);
```

**See:** `references/structured-outputs.md` for advanced patterns
---
Task 4: Setup Context Caching
Goal: Reuse large contexts (>2k tokens) for cost savings.
Python Example:
```python
import google.generativeai as genai
from pathlib import Path

genai.configure(api_key="YOUR_API_KEY")

# Load large document
large_doc = Path("codebase.txt").read_text()  # Must be >2048 tokens

# Create cached content
cached_content = genai.caching.CachedContent.create(
    model="gemini-3-pro-preview",
    system_instruction="You are a code reviewer",
    contents=[large_doc]
)

# Use cached content
model = genai.GenerativeModel.from_cached_content(cached_content)

# Multiple queries using the same cached context
response1 = model.generate_content("Find all security vulnerabilities")
response2 = model.generate_content("Suggest performance improvements")
response3 = model.generate_content("Check for code duplication")

# Cost savings: cached tokens are 90% cheaper
print(f"Cache name: {cached_content.name}")

# Clean up cache when done
cached_content.delete()
```
**Cost Comparison:**
| Context Size | Without Cache | With Cache | Savings |
|-------------|---------------|------------|---------|
| 100k tokens × 10 queries | $2.00 | $0.22 | 89% |
| 500k tokens × 50 queries | $50.00 | $5.50 | 89% |
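A back-of-envelope check of the table, assuming input tokens bill at $2.00/M and cached tokens at 10% of that rate (the table's extra ~$0.02–$0.50 presumably covers cache write/storage, which this sketch does not model):

```python
# Hedged cost sketch: assumed $2.00/M input rate, cached tokens at 10%.
RATE_PER_M = 2.00        # $/M input tokens (assumption)
CACHED_FRACTION = 0.10   # cached tokens are "90% cheaper"

def batch_cost(tokens_millions, queries, cached=False):
    rate = RATE_PER_M * (CACHED_FRACTION if cached else 1.0)
    return tokens_millions * queries * rate

print(batch_cost(0.1, 10))               # 2.0 -> matches "$2.00"
print(batch_cost(0.1, 10, cached=True))  # 0.2 -> close to the table's "$0.22"
```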
**See:** `references/context-caching.md` for comprehensive guide
---
Task 5: Implement Batch Processing
Goal: Process multiple requests asynchronously.
Python Example:
```python
import asyncio

import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-3-pro-preview")

# Prepare batch requests
prompts = [
    "Summarize the benefits of AI",
    "Explain quantum computing",
    "Describe blockchain technology",
    "What is machine learning?"
]

# Process in batch (use the async client method so requests actually overlap)
async def generate_async(prompt):
    response = await model.generate_content_async(prompt)
    return {"prompt": prompt, "response": response.text}

async def batch_process(prompts):
    tasks = [generate_async(p) for p in prompts]
    results = await asyncio.gather(*tasks)
    return results

# Run batch
results = asyncio.run(batch_process(prompts))
for result in results:
    print(f"Q: {result['prompt']}")
    print(f"A: {result['response']}\n")
**See:** `references/batch-processing.md` for advanced patterns
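At scale, an unbounded `asyncio.gather` can fire every request at once and trip rate limits. A minimal sketch of bounding concurrency with a semaphore (the limit of 2 and the stand-in worker are illustrative assumptions, not part of the SDK):

```python
# Hedged sketch: cap in-flight requests with an asyncio.Semaphore.
import asyncio

async def bounded_batch(prompts, worker, limit=8):
    sem = asyncio.Semaphore(limit)

    async def run(prompt):
        async with sem:           # at most `limit` workers run concurrently
            return await worker(prompt)

    # gather preserves input order regardless of completion order
    return await asyncio.gather(*(run(p) for p in prompts))

# Demo with a stand-in worker instead of a real API call
async def fake_worker(prompt):
    await asyncio.sleep(0)
    return f"answer to: {prompt}"

results = asyncio.run(bounded_batch(["q1", "q2", "q3"], fake_worker, limit=2))
print(results)  # ['answer to: q1', 'answer to: q2', 'answer to: q3']
```

In the batch example above, `generate_async` could be passed as the `worker`.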
---
Task 6: Manage Thought Signatures
Goal: Handle thought signatures in complex multi-turn scenarios.
Key Points:
- Standard Chat: SDKs handle automatically
- Function Calls: Must return signatures in sequential order
- Parallel Calls: Only first call contains signature
- Image Editing: Required on first part and all subsequent parts
Example with Function Calls:
```python
# When handling function calls, preserve signatures
response = chat.send_message("Use these tools...")

function_calls = []
signatures = []
for part in response.parts:
    if part.function_call:
        function_calls.append(part.function_call)
        if hasattr(part, 'thought_signature'):
            signatures.append(part.thought_signature)

# Execute functions
results = [execute_function(fc) for fc in function_calls]

# Return results with signatures in order
response_parts = []
for i, result in enumerate(results):
    part = genai.protos.Part(
        function_response=genai.protos.FunctionResponse(
            name=function_calls[i].name,
            response=result
        )
    )
    if i < len(signatures):
        part.thought_signature = signatures[i]
    response_parts.append(part)

response = chat.send_message(response_parts)
```
**Bypass Validation (when needed):**
```python
# Use bypass string for migration/testing
bypass_signature = "context_engineering_is_the_way_to_go"
```
**See:** `references/thought-signatures.md` for complete guide
---
Task 7: Integrate with Frameworks
Goal: Use Gemini 3 Pro with popular AI frameworks.
LangChain:
```python
from langchain_google_genai import ChatGoogleGenerativeAI

llm = ChatGoogleGenerativeAI(
    model="gemini-3-pro-preview",
    google_api_key="YOUR_API_KEY"
)

response = llm.invoke("Explain neural networks")
print(response.content)
```

Vercel AI SDK:
```typescript
import { createGoogleGenerativeAI } from '@ai-sdk/google';
import { generateText } from 'ai';

const google = createGoogleGenerativeAI({
  apiKey: process.env.GEMINI_API_KEY
});

const { text } = await generateText({
  model: google('gemini-3-pro-preview'),
  prompt: 'Explain neural networks'
});

console.log(text);
```

Pydantic AI:
```python
from pydantic_ai import Agent

agent = Agent(
    'google-genai:gemini-3-pro-preview',
    system_prompt='You are a helpful AI assistant'
)

result = agent.run_sync('Explain neural networks')
print(result.data)
```

**See:** `references/framework-integration.md` for all frameworks
---
Production Best Practices
1. Error Handling
```python
import logging

from google.api_core import exceptions, retry

logger = logging.getLogger(__name__)

@retry.Retry(
    predicate=retry.if_exception_type(
        exceptions.ResourceExhausted,
        exceptions.ServiceUnavailable
    )
)
def safe_generate(prompt):
    try:
        return model.generate_content(prompt)
    except exceptions.InvalidArgument as e:
        logger.error(f"Invalid argument: {e}")
        raise
    except Exception as e:
        logger.error(f"Unexpected error: {e}")
        raise
```
2. Rate Limiting
```python
import time
from collections import deque

class RateLimiter:
    def __init__(self, max_rpm=60):
        self.max_rpm = max_rpm
        self.requests = deque()

    def wait_if_needed(self):
        now = time.time()
        # Keep only requests from the last 60 seconds
        self.requests = deque([t for t in self.requests if t > now - 60])
        if len(self.requests) >= self.max_rpm:
            sleep_time = 60 - (now - self.requests[0])
            time.sleep(max(0, sleep_time))
        self.requests.append(now)
```
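The sliding-window decision the class makes can be isolated and checked without real sleeps. A minimal sketch with injected timestamps instead of `time.time()` (the `allowed` helper is illustrative, not part of the class above):

```python
# Hedged sketch of the sliding-window check RateLimiter relies on,
# using injected timestamps so it runs instantly and deterministically.
from collections import deque

def allowed(timestamps, now, max_rpm=60):
    """True if one more request at `now` stays within max_rpm per 60s."""
    recent = deque(t for t in timestamps if t > now - 60)
    return len(recent) < max_rpm

print(allowed([0.0, 1.0], now=2.0, max_rpm=3))       # True: 2 in window
print(allowed([0.0, 1.0, 1.5], now=2.0, max_rpm=3))  # False: window full
print(allowed([0.0], now=65.0, max_rpm=1))           # True: old entry expired
```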
3. Cost Monitoring
```python
class CostTracker:
    def __init__(self):
        self.total_cost = 0

    def track(self, response):
        usage = response.usage_metadata
        input_cost = (usage.prompt_token_count / 1_000_000) * 2.00
        output_cost = (usage.candidates_token_count / 1_000_000) * 12.00
        cost = input_cost + output_cost
        self.total_cost += cost
        return {
            "input_tokens": usage.prompt_token_count,
            "output_tokens": usage.candidates_token_count,
            "cost": cost,
            "total_cost": self.total_cost
        }
```
References
Core Features
- Function Calling - Custom tool integration
- Built-in Tools - Google Search, Code Execution, etc.
- Structured Outputs - JSON schema with Pydantic/Zod
- Thought Signatures - Managing reasoning context
- Context Caching - Cost optimization with caching
- Batch Processing - Async and batch API
Integration
- Framework Integration - LangChain, Vercel AI, etc.
- Production Guide - Deployment best practices
Scripts
- Function Calling Script - Tool integration example
- Tools Script - Built-in tools demonstration
- Structured Output Script - JSON schema example
- Caching Script - Context caching implementation
- Batch Script - Batch processing example
Official Resources
Related Skills
- gemini-3-pro-api - Basic setup, authentication, text generation
- gemini-3-multimodal - Media processing (images, video, audio)
- gemini-3-image-generation - Image generation
Summary
This skill provides advanced production features:
✅ Function calling with custom tools
✅ Built-in tools (Search, Code Exec, etc.)
✅ Structured JSON outputs
✅ Thought signature management
✅ Context caching for cost savings
✅ Batch processing at scale
✅ Framework integrations
✅ Production-ready patterns
Ready for advanced features? Start with the task that matches your use case above!