Learning SDK Integration

Overview

This skill provides universal patterns for adding persistent memory to LLM agents using the Learning SDK through a 3-line integration pattern that works with OpenAI, Anthropic, Gemini, and other LLM providers.

When to Use

Use this skill when:
  • Building LLM agents that need memory across sessions
  • Implementing conversation history persistence
  • Adding context-aware capabilities to existing agents
  • Creating multi-agent systems with shared memory
  • Working with any LLM provider (OpenAI, Anthropic, Gemini, etc.)

Core Integration Pattern

Basic 3-Line Integration

```python
from agentic_learning import learning

# Wrap LLM SDK calls to enable memory
with learning(agent="my-agent"):
    response = openai.chat.completions.create(...)
```

Async Integration

```python
from agentic_learning import learning_async

# For async LLM SDK usage
async with learning_async(agent="my-agent"):
    response = await claude.messages.create(...)
```

Provider-Specific Examples

OpenAI Integration

```python
from openai import AsyncOpenAI
from agentic_learning import learning_async

class MemoryEnhancedOpenAIAgent:
    def __init__(self, api_key: str, agent_name: str):
        # Use the async client so completions can be awaited
        self.client = AsyncOpenAI(api_key=api_key)
        self.agent_name = agent_name

    async def chat(self, message: str, model: str = "gpt-4"):
        async with learning_async(agent=self.agent_name):
            response = await self.client.chat.completions.create(
                model=model,
                messages=[{"role": "user", "content": message}]
            )
            return response.choices[0].message.content
```

Claude Integration

```python
from anthropic import AsyncAnthropic
from agentic_learning import learning_async

class MemoryEnhancedClaudeAgent:
    def __init__(self, api_key: str, agent_name: str):
        # Use the async client so message calls can be awaited
        self.client = AsyncAnthropic(api_key=api_key)
        self.agent_name = agent_name

    async def chat(self, message: str, model: str = "claude-3-5-sonnet-20241022"):
        async with learning_async(agent=self.agent_name):
            response = await self.client.messages.create(
                model=model,
                max_tokens=1000,
                messages=[{"role": "user", "content": message}]
            )
            return response.content[0].text
```

Gemini Integration

```python
import google.generativeai as genai
from agentic_learning import learning_async

class MemoryEnhancedGeminiAgent:
    def __init__(self, api_key: str, agent_name: str):
        genai.configure(api_key=api_key)
        self.model = genai.GenerativeModel('gemini-pro')
        self.agent_name = agent_name

    async def chat(self, message: str):
        async with learning_async(agent=self.agent_name):
            response = await self.model.generate_content_async(message)
            return response.text
```

PydanticAI Integration

```python
from pydantic_ai import Agent
from agentic_learning import learning

agent = Agent('anthropic:claude-sonnet-4-20250514')

with learning(agent="pydantic-demo"):
    result = agent.run_sync("Hello!")
```

For detailed patterns including structured output, tool usage, and async examples, see references/pydantic-ai.md.

Advanced Patterns

Memory-Only Mode (Capture Without Injection)

```python
# Use capture_only=True to save conversations without memory injection
async with learning_async(agent="research-agent", capture_only=True):
    # The conversation will be saved, but no memory will be retrieved/injected
    response = await llm_call(...)
```

Custom Memory Blocks

```python
# Define custom memory blocks for specific context
custom_memory = [
    {"label": "project_context", "description": "Current project details"},
    {"label": "user_preferences", "description": "User's working preferences"}
]

async with learning_async(agent="my-agent", memory=custom_memory):
    response = await llm_call(...)
```

Multi-Agent Memory Sharing

```python
# Multiple agents can share memory by using the same agent name
agent1 = MemoryEnhancedOpenAIAgent(api_key, "shared-agent")
agent2 = MemoryEnhancedClaudeAgent(api_key, "shared-agent")

# Both agents will access the same memory context
response1 = await agent1.chat("Research topic X")
response2 = await agent2.chat("Summarize our research")
```

Context-Aware Tool Selection

```python
async def context_aware_tool_use():
    async with learning_async(agent="tool-selector"):
        # Memory will help the agent choose appropriate tools
        memories = await get_memories("tool-selector")

        if "web_search_needed" in str(memories):
            return use_web_search()
        elif "data_analysis" in str(memories):
            return use_data_tools()
        else:
            return use_default_tools()
```

Best Practices

1. Agent Naming

  • Use descriptive agent names that reflect their purpose
  • For related functionality, use consistent naming patterns
  • Examples: email-processor, research-assistant, code-reviewer
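These conventions can be enforced with a small helper that builds names from a domain and a role. This is a hypothetical convenience function, not part of the SDK, which accepts any string as an agent name:

```python
import re

def make_agent_name(domain: str, role: str) -> str:
    """Build a consistent, descriptive agent name like 'email-processor'."""
    slug = f"{domain}-{role}".lower()
    # Collapse whitespace and underscores into single hyphens
    return re.sub(r"[\s_]+", "-", slug)

# make_agent_name("Email", "Processor") -> "email-processor"
```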

2. Memory Structure

```python
# Good: specific, purposeful memory blocks
memory_blocks = [
    {"label": "conversation_history", "description": "Recent conversation context"},
    {"label": "task_context", "description": "Current task and goals"},
    {"label": "user_preferences", "description": "User interaction preferences"}
]
```
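Because each memory block is a plain dict with `label` and `description` keys, a small guard (a hypothetical helper, not an SDK API) can catch malformed blocks before they are passed to `learning_async(memory=...)`:

```python
def validate_memory_blocks(blocks: list) -> list:
    """Raise ValueError if any block is missing the expected string fields."""
    for i, block in enumerate(blocks):
        missing = {"label", "description"} - set(block)
        if missing:
            raise ValueError(f"memory block {i} is missing keys: {sorted(missing)}")
        if not all(isinstance(block[k], str) for k in ("label", "description")):
            raise ValueError(f"memory block {i} has non-string values")
    return blocks

memory_blocks = validate_memory_blocks([
    {"label": "task_context", "description": "Current task and goals"},
])
```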

3. Error Handling

```python
async def robust_llm_call(message: str):
    try:
        async with learning_async(agent="my-agent"):
            return await llm_sdk_call(...)
    except Exception:
        # Fall back to a call without memory if learning fails
        return await llm_sdk_call(...)
```

4. Provider Selection Patterns

```python
def choose_provider(task_type: str, budget: str, latency_requirement: str):
    """Select an LLM provider based on task requirements"""

    if task_type == "code_generation" and budget == "high":
        return "claude-3-5-sonnet"  # Best for code
    elif task_type == "general_chat" and budget == "low":
        return "gpt-3.5-turbo"  # Cost-effective
    elif latency_requirement == "ultra_low":
        return "gemini-1.5-flash"  # Fastest
    else:
        return "gpt-4"  # Good all-rounder
```

Memory Management

Retrieving Conversation History

```python
from agentic_learning import AsyncAgenticLearning

async def get_conversation_context(agent_name: str):
    client = AsyncAgenticLearning()
    memories = await client.get_memories(agent_name)
    return memories
```

Clearing Memory

```python
# When starting fresh contexts
client = AsyncAgenticLearning()
await client.clear_memory(agent_name)
```

Integration Examples

Universal Research Agent

```python
from agentic_learning import learning_async

class UniversalResearchAgent:
    def __init__(self, provider: str, api_key: str):
        self.provider = provider
        self.client = self._initialize_client(provider, api_key)

    def _initialize_client(self, provider: str, api_key: str):
        if provider == "openai":
            from openai import OpenAI
            return OpenAI(api_key=api_key)
        elif provider == "claude":
            from anthropic import Anthropic
            return Anthropic(api_key=api_key)
        elif provider == "gemini":
            import google.generativeai as genai
            genai.configure(api_key=api_key)
            return genai.GenerativeModel('gemini-pro')
        raise ValueError(f"Unsupported provider: {provider}")

    async def research(self, topic: str):
        async with learning_async(
            agent="universal-researcher",
            memory=[
                {"label": "research_history", "description": "Previous research topics"},
                {"label": "current_session", "description": "Current research session"}
            ]
        ):
            prompt = f"Research the topic: {topic}. Consider previous research context."
            response = await self._make_llm_call(prompt)
            return response
```

Multi-Provider Code Review Assistant

```python
from agentic_learning import learning_async

class CodeReviewAssistant:
    def __init__(self, providers: dict):
        self.providers = providers
        self.clients = {name: self._init_client(name, key)
                        for name, key in providers.items()}

    async def review_with_multiple_perspectives(self, code: str):
        reviews = {}

        for provider_name, client in self.clients.items():
            async with learning_async(
                agent=f"code-reviewer-{provider_name}",
                memory=[
                    {"label": "review_history", "description": "Past code reviews"},
                    {"label": "coding_standards", "description": "Project standards"}
                ]
            ):
                prompt = f"Review this code from {provider_name} perspective: {code}"
                reviews[provider_name] = await self._make_llm_call(client, prompt)

        # Synthesize multiple perspectives
        return await self._synthesize_reviews(reviews)
```

Testing Integration

Unit Test Pattern

```python
import pytest
from agentic_learning import learning_async, AsyncAgenticLearning

async def test_memory_integration():
    async with learning_async(agent="test-agent"):
        # Test that memory is working
        response = await llm_sdk_call("Remember this test")

        # Verify memory was captured
        client = AsyncAgenticLearning()
        memories = await client.get_memories("test-agent")
        assert len(memories) > 0

@pytest.mark.parametrize("provider", ["openai", "claude", "gemini"])
async def test_provider_memory_integration(provider):
    # Test that memory works with each provider
    agent = create_agent(provider, api_key)
    response = await agent.chat("Test message")
    assert response is not None
```

Troubleshooting

Common Issues

  1. Memory not appearing: Ensure the agent name is consistent across calls
  2. Performance issues: Use capture_only=True for logging-only scenarios
  3. Context overflow: Regularly clear memory for long-running sessions
  4. Async conflicts: Always use learning_async with async SDK calls
  5. Provider compatibility: Check SDK version compatibility with the Agentic Learning SDK

Debug Mode

```python
# Enable debug logging to see memory operations
import logging
logging.basicConfig(level=logging.DEBUG)

async with learning_async(agent="debug-agent"):
    # Memory operations will be logged
    response = await llm_sdk_call(...)
```

Provider-Specific Considerations

OpenAI

  • Works best with the chat.completions endpoint
  • Supports both sync and async clients
  • Token counting available for cost tracking
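For the cost-tracking point, the usage field returned by chat.completions can feed a simple accumulator. A minimal sketch, using illustrative per-token prices rather than actual OpenAI pricing:

```python
def estimate_cost(usage: dict, prompt_price: float, completion_price: float) -> float:
    """Estimate one call's cost from a chat.completions usage payload.

    Prices are illustrative per-token rates, not real OpenAI pricing.
    """
    return (usage["prompt_tokens"] * prompt_price
            + usage["completion_tokens"] * completion_price)

# Shape matches response.usage from the chat.completions endpoint
usage = {"prompt_tokens": 120, "completion_tokens": 80, "total_tokens": 200}
cost = estimate_cost(usage, prompt_price=1e-5, completion_price=3e-5)
```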

Claude

  • Use the messages endpoint for conversation
  • Handles long context well
  • Good for code and analysis tasks

Gemini

  • Use generate_content_async for async calls
  • Supports multimodal inputs
  • Fast response times

References

Skill References

  • references/pydantic-ai.md - PydanticAI integration patterns
  • references/mem0-migration.md - Migrating from mem0 to the Learning SDK