# agents-py
Build LiveKit Agent backends in Python. Use this skill when creating voice AI agents, voice assistants, or any realtime AI application using LiveKit's Python Agents SDK (livekit-agents). Covers AgentSession, Agent class, function tools, STT/LLM/TTS models, turn detection, and multi-agent workflows.
## NPX Install

```bash
npx skill4agent add codestackr/livekit-skills agents-py
```
## LiveKit Agents Python SDK
Build voice AI agents with LiveKit's Python Agents SDK.
### LiveKit MCP server tools
This skill works alongside the LiveKit MCP server, which provides direct access to the latest LiveKit documentation, code examples, and changelogs. Use these tools when you need up-to-date information that may have changed since this skill was created.
Available MCP tools:
- `docs_search` - Search the LiveKit docs site
- `get_pages` - Fetch specific documentation pages by path
- `get_changelog` - Get recent releases and updates for LiveKit packages
- `code_search` - Search LiveKit repositories for code examples
- `get_python_agent_example` - Browse 100+ Python agent examples
When to use MCP tools:

- You need the latest API documentation or feature updates
- You're looking for recent examples or code patterns
- You want to check whether a feature has been added in a recent release
- The local references don't cover a specific topic

When to use local references:

- You need quick access to core concepts covered in this skill
- You're working offline or want faster access to common patterns
- The information in the references is sufficient for your needs
Use MCP tools and local references together for the best experience.
### References
Consult these resources as needed:
- ./references/livekit-overview.md -- LiveKit ecosystem overview and how these skills work together
- ./references/agent-session.md -- AgentSession lifecycle, events, and configuration
- ./references/tools.md -- Function tools, RunContext, and tool results
- ./references/models.md -- STT, LLM, TTS model strings and plugin configuration
- ./references/workflows.md -- Multi-agent handoffs, Tasks, TaskGroups, and pipeline nodes
### Installation

```bash
uv add "livekit-agents[silero,turn-detector]~=1.3" \
  "livekit-plugins-noise-cancellation~=0.2" \
  "python-dotenv"
```

### Environment variables
Use the LiveKit CLI to load your credentials into a `.env.local` file:

```bash
lk app env -w
```

Or manually create a `.env.local` file:

```bash
LIVEKIT_API_KEY=your_api_key
LIVEKIT_API_SECRET=your_api_secret
LIVEKIT_URL=wss://your-project.livekit.cloud
```

### Quick start
#### Basic agent with STT-LLM-TTS pipeline
```python
from dotenv import load_dotenv
from livekit import agents, rtc
from livekit.agents import AgentSession, Agent, AgentServer, room_io
from livekit.plugins import noise_cancellation, silero
from livekit.plugins.turn_detector.multilingual import MultilingualModel

load_dotenv(".env.local")


class Assistant(Agent):
    def __init__(self) -> None:
        super().__init__(
            instructions="""You are a helpful voice AI assistant.
            Keep responses concise, 1-3 sentences. No markdown or emojis.""",
        )


server = AgentServer()


@server.rtc_session()
async def entrypoint(ctx: agents.JobContext):
    session = AgentSession(
        stt="assemblyai/universal-streaming:en",
        llm="openai/gpt-4.1-mini",
        tts="cartesia/sonic-3:9626c31c-bec5-4cca-baa8-f8ba9e84c8bc",
        vad=silero.VAD.load(),
        turn_detection=MultilingualModel(),
    )

    await session.start(
        room=ctx.room,
        agent=Assistant(),
        room_options=room_io.RoomOptions(
            audio_input=room_io.AudioInputOptions(
                noise_cancellation=lambda params: noise_cancellation.BVCTelephony()
                if params.participant.kind == rtc.ParticipantKind.PARTICIPANT_KIND_SIP
                else noise_cancellation.BVC(),
            ),
        ),
    )

    await session.generate_reply(
        instructions="Greet the user and offer your assistance."
    )


if __name__ == "__main__":
    agents.cli.run_app(server)
```

#### Basic agent with realtime model
```python
from dotenv import load_dotenv
from livekit import agents, rtc
from livekit.agents import AgentSession, Agent, AgentServer, room_io
from livekit.plugins import openai, noise_cancellation

load_dotenv(".env.local")


class Assistant(Agent):
    def __init__(self) -> None:
        super().__init__(
            instructions="You are a helpful voice AI assistant."
        )


server = AgentServer()


@server.rtc_session()
async def entrypoint(ctx: agents.JobContext):
    session = AgentSession(
        llm=openai.realtime.RealtimeModel(voice="coral")
    )

    await session.start(
        room=ctx.room,
        agent=Assistant(),
        room_options=room_io.RoomOptions(
            audio_input=room_io.AudioInputOptions(
                noise_cancellation=lambda params: noise_cancellation.BVCTelephony()
                if params.participant.kind == rtc.ParticipantKind.PARTICIPANT_KIND_SIP
                else noise_cancellation.BVC(),
            ),
        ),
    )

    await session.generate_reply(
        instructions="Greet the user and offer your assistance."
    )


if __name__ == "__main__":
    agents.cli.run_app(server)
```

### Core concepts
#### Agent class
Define agent behavior by subclassing `Agent`:

```python
from livekit.agents import Agent, function_tool


class MyAgent(Agent):
    def __init__(self) -> None:
        super().__init__(
            instructions="Your system prompt here",
        )

    async def on_enter(self) -> None:
        """Called when the agent becomes active."""
        await self.session.generate_reply(
            instructions="Greet the user"
        )

    async def on_exit(self) -> None:
        """Called before the agent hands off to another agent."""
        pass

    @function_tool()
    async def my_tool(self, param: str) -> str:
        """Tool description for the LLM."""
        return f"Result: {param}"
```

#### AgentSession
The session orchestrates the voice pipeline:
```python
session = AgentSession(
    stt="assemblyai/universal-streaming:en",
    llm="openai/gpt-4.1-mini",
    tts="cartesia/sonic-3:voice_id",
    vad=silero.VAD.load(),
    turn_detection=MultilingualModel(),
)
```

Key methods:

- `session.start(room, agent)` - Start the session
- `session.say(text)` - Speak text directly
- `session.generate_reply(instructions)` - Generate an LLM response
- `session.interrupt()` - Stop the current speech
- `session.update_agent(new_agent)` - Switch to a different agent
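Conceptually, `update_agent` swaps the active agent mid-session: the outgoing agent's `on_exit` hook runs before the incoming agent's `on_enter`. A stdlib-only sketch of that lifecycle ordering — `MiniSession` and `StubAgent` here are illustrative stand-ins, not LiveKit APIs:

```python
import asyncio


class StubAgent:
    """Stand-in for livekit.agents.Agent; records lifecycle calls."""

    def __init__(self, name: str, log: list[str]) -> None:
        self.name = name
        self.log = log

    async def on_enter(self) -> None:
        self.log.append(f"{self.name}:enter")

    async def on_exit(self) -> None:
        self.log.append(f"{self.name}:exit")


class MiniSession:
    """Illustrative stand-in for AgentSession's agent-swap behavior."""

    def __init__(self) -> None:
        self.agent: StubAgent | None = None

    async def start(self, agent: StubAgent) -> None:
        self.agent = agent
        await agent.on_enter()

    async def update_agent(self, new_agent: StubAgent) -> None:
        if self.agent is not None:
            await self.agent.on_exit()  # outgoing agent cleans up first
        self.agent = new_agent
        await new_agent.on_enter()  # incoming agent takes over


async def main() -> list[str]:
    log: list[str] = []
    session = MiniSession()
    await session.start(StubAgent("greeter", log))
    await session.update_agent(StubAgent("billing", log))
    return log


if __name__ == "__main__":
    print(asyncio.run(main()))
    # → ['greeter:enter', 'greeter:exit', 'billing:enter']
```

In the real SDK, the same exit-then-enter ordering is what lets an outgoing agent summarize state before a specialist agent greets the user.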
#### Function tools
Use the `@function_tool` decorator:

```python
from livekit.agents import function_tool, RunContext


@function_tool()
async def get_weather(self, context: RunContext, location: str) -> str:
    """Get the current weather for a location."""
    return f"Weather in {location}: Sunny, 72°F"
```

### Running the agent
```bash
# Development mode with auto-reload
uv run agent.py dev

# Console mode (local testing)
uv run agent.py console

# Production mode
uv run agent.py start

# Download required model files
uv run agent.py download-files
```

### LiveKit Inference model strings
Use model strings for simple configuration without API keys:
STT (Speech-to-Text):

- `"assemblyai/universal-streaming:en"` - AssemblyAI streaming
- `"deepgram/nova-3:en"` - Deepgram Nova
- `"cartesia/ink"` - Cartesia STT

LLM (Large Language Model):

- `"openai/gpt-4.1-mini"` - GPT-4.1 mini (recommended)
- `"openai/gpt-4.1"` - GPT-4.1
- `"openai/gpt-5"` - GPT-5
- `"gemini/gemini-3-flash"` - Gemini 3 Flash
- `"gemini/gemini-2.5-flash"` - Gemini 2.5 Flash

TTS (Text-to-Speech):

- `"cartesia/sonic-3:{voice_id}"` - Cartesia Sonic 3
- `"elevenlabs/eleven_turbo_v2_5:{voice_id}"` - ElevenLabs
- `"deepgram/aura:{voice}"` - Deepgram Aura
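These strings follow a `provider/model[:variant]` pattern, where the optional variant carries a language code or voice ID. A small hypothetical helper — not part of the SDK — that splits one apart, just to make the format explicit:

```python
from typing import NamedTuple, Optional


class ModelString(NamedTuple):
    provider: str
    model: str
    variant: Optional[str]  # language code or voice ID, if present


def parse_model_string(s: str) -> ModelString:
    """Split an inference model string like 'cartesia/sonic-3:voice_id'."""
    provider, _, rest = s.partition("/")
    model, sep, variant = rest.partition(":")
    return ModelString(provider, model, variant if sep else None)


print(parse_model_string("assemblyai/universal-streaming:en"))
# → ModelString(provider='assemblyai', model='universal-streaming', variant='en')
print(parse_model_string("openai/gpt-4.1-mini"))
# → ModelString(provider='openai', model='gpt-4.1-mini', variant=None)
```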
### Best practices
- Always use LiveKit Inference model strings as the default for STT, LLM, and TTS. This eliminates the need to manage individual provider API keys. Only use plugins when you specifically need custom models, voice cloning, Anthropic Claude, or self-hosted models.
- Use adaptive noise cancellation with a lambda to detect SIP participants and apply appropriate noise cancellation (BVCTelephony for phone calls, BVC for standard participants).
- Use MultilingualModel turn detection for natural conversation flow.
- Structure prompts with Identity, Output rules, Tools, Goals, and Guardrails sections.
- Test with console mode before deploying to LiveKit Cloud.
- Use `lk app env -w` to load LiveKit Cloud credentials into your environment.
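The prompt-structure practice above can be sketched as a plain string template. The section contents below are placeholders, not prescribed wording:

```python
def build_instructions(
    identity: str, output_rules: str, tools: str, goals: str, guardrails: str
) -> str:
    """Assemble a system prompt using the recommended section layout."""
    sections = {
        "Identity": identity,
        "Output rules": output_rules,
        "Tools": tools,
        "Goals": goals,
        "Guardrails": guardrails,
    }
    return "\n\n".join(f"# {name}\n{body}" for name, body in sections.items())


# Placeholder content for illustration only
instructions = build_instructions(
    identity="You are a helpful voice AI assistant for a support line.",
    output_rules="Keep responses concise, 1-3 sentences. No markdown or emojis.",
    tools="Use get_weather when the user asks about the weather.",
    goals="Resolve the caller's issue or hand off to a human.",
    guardrails="Never share account data without verification.",
)
```

The resulting string can then be passed as `Agent(instructions=instructions)`.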