replicate-cli
This skill provides comprehensive guidance for using the Replicate CLI to run AI models, create predictions, manage deployments, and fine-tune models. Use this skill when the user wants to interact with Replicate's AI model platform via command line, including running image generation models, language models, or any ML model hosted on Replicate. This skill should be used when users ask about running models on Replicate, creating predictions, managing deployments, fine-tuning models, or working with the Replicate API through the CLI.
Install via NPX:

```bash
npx skill4agent add rawveg/skillsforge-marketplace replicate-cli
```

# Replicate CLI
The Replicate CLI is a command-line tool for interacting with Replicate's AI model platform. It enables running predictions, managing models, creating deployments, and fine-tuning models directly from the terminal.
## Authentication

Before using the Replicate CLI, set the API token:

```bash
export REPLICATE_API_TOKEN=<token-from-replicate.com/account>
```

Alternatively, authenticate interactively:

```bash
replicate auth login
```

Verify authentication:

```bash
replicate account current
```
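Scripts that wrap these commands can fail fast when the token is missing. A minimal guard in plain shell; `require_token` is an illustrative helper, not part of the CLI, and it makes no API call:

```shell
# Check the environment before calling the CLI; no API request is made here.
require_token() {
  if [ -z "${REPLICATE_API_TOKEN:-}" ]; then
    echo "error: REPLICATE_API_TOKEN is not set" >&2
    return 1
  fi
}

# Usage in a wrapper script:
#   require_token || exit 1
#   replicate run stability-ai/sdxl prompt="a corgi"
```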
## Core Commands

### Running Predictions
The primary use case is running predictions against hosted models.
Basic prediction:

```bash
replicate run <owner/model> input_key=value
```

Examples:

Image generation:

```bash
replicate run stability-ai/sdxl prompt="a studio photo of a rainbow colored corgi"
```

Text generation with streaming:

```bash
replicate run meta/llama-2-70b-chat --stream prompt="Tell me a joke"
```

Prediction flags:

- `--stream` - Stream output tokens in real time (for text models)
- `--no-wait` - Submit the prediction without waiting for completion
- `--web` - Open the prediction in a browser
- `--json` - Output the result as JSON
- `--save` - Save outputs to a local directory
- `--output-directory <dir>` - Specify the output directory (default: `./{prediction-id}`)
### Input Handling

File uploads: Prefix local file paths with `@`:

```bash
replicate run nightmareai/real-esrgan image=@photo.jpg
```

Output chaining: Use the `{{.output}}` template syntax to chain predictions:

```bash
replicate run stability-ai/sdxl prompt="a corgi" | \
  replicate run nightmareai/real-esrgan image={{.output[0]}}
```

## Model Operations
View model schema (required inputs and outputs):

```bash
replicate model schema <owner/model>
replicate model schema stability-ai/sdxl --json
```

List models:

```bash
replicate model list
replicate model list --json
```

Show model details:

```bash
replicate model show <owner/model>
```

Create a new model:

```bash
replicate model create <owner/name> \
  --hardware gpu-a100-large \
  --private \
  --description "Model description"
```

Model creation flags:

- `--hardware <sku>` - Hardware SKU (see `references/hardware.md`)
- `--private` / `--public` - Visibility setting
- `--description <text>` - Model description
- `--github-url <url>` - Link to source repository
- `--license-url <url>` - License information
- `--cover-image-url <url>` - Cover image for the model page
## Training (Fine-tuning)

Fine-tune models using the `train` command:

```bash
replicate train <base-model> \
  --destination <owner/new-model> \
  input_key=value
```

Example - fine-tune SDXL with DreamBooth:

```bash
replicate train stability-ai/sdxl \
  --destination myuser/custom-sdxl \
  --web \
  input_images=@training-images.zip \
  use_face_detection_instead=true
```

List trainings:

```bash
replicate training list
```

Show training details:

```bash
replicate training show <training-id>
```

## Deployments
Deployments provide dedicated, always-on inference endpoints with predictable performance.
Create a deployment:

```bash
replicate deployments create <name> \
  --model <owner/model> \
  --hardware <sku> \
  --min-instances 1 \
  --max-instances 3
```

Example:

```bash
replicate deployments create text-to-image \
  --model stability-ai/sdxl \
  --hardware gpu-a100-large \
  --min-instances 1 \
  --max-instances 5
```

Update a deployment:

```bash
replicate deployments update <name> \
  --max-instances 10 \
  --version <version-id>
```

List deployments:

```bash
replicate deployments list
```

Show deployment details and schema:

```bash
replicate deployments show <name>
replicate deployments schema <name>
```

## Hardware
List available hardware options:

```bash
replicate hardware list
```

See `references/hardware.md` for detailed hardware information and selection guidelines.

## Scaffolding
Create a local development environment from an existing prediction:

```bash
replicate scaffold <prediction-id-or-url> --template=<node|python>
```

This generates a project with the prediction's model and inputs pre-configured.

## Command Aliases
For convenience, short aliases are available for subcommands:

- `replicate m` = `replicate model`
- `replicate p` = `replicate prediction`
- `replicate t` = `replicate training`
- `replicate d` = `replicate deployments`
- `replicate hw` = `replicate hardware`
- `replicate a` = `replicate account`

## Common Workflows
### Image Generation Pipeline

Generate an image and upscale it:

```bash
replicate run stability-ai/sdxl \
  prompt="professional photo of a sunset" \
  negative_prompt="blurry, low quality" | \
  replicate run nightmareai/real-esrgan \
  image={{.output[0]}} \
  --save
```

### Check Model Inputs Before Running
Always check the model schema to understand required inputs:

```bash
replicate model schema owner/model-name
```

### Batch Processing
Run predictions and save outputs:

```bash
for prompt in "cat" "dog" "bird"; do
  replicate run stability-ai/sdxl prompt="$prompt" --save --output-directory "./outputs/$prompt"
done
```
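Prompts often contain spaces or punctuation that make poor directory names. One way to keep the batch loop tidy is to slugify each prompt first; `slugify` here is our helper, not a CLI feature, and the `replicate run` line is left commented so the sketch runs standalone:

```shell
# Turn a prompt into a filesystem-safe directory name.
slugify() {
  printf '%s\n' "$1" | tr '[:upper:]' '[:lower:]' \
    | sed -e 's/[^a-z0-9][^a-z0-9]*/-/g' -e 's/^-//' -e 's/-$//'
}

for prompt in "a studio photo of a corgi" "watercolor of a lighthouse"; do
  dir="./outputs/$(slugify "$prompt")"
  echo "$dir"
  # replicate run stability-ai/sdxl prompt="$prompt" --save --output-directory "$dir"
done
```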
### Monitor Long-Running Tasks

Submit without waiting, then check status:
```bash
# Submit
replicate run owner/model input=value --no-wait --json > prediction.json

# Check status later
replicate prediction show $(jq -r '.id' prediction.json)
```

## Best Practices
- **Always check schema first** - Run `replicate model schema <model>` to understand required and optional inputs before running predictions.
- **Use streaming for text models** - Add the `--stream` flag when running language models to see output in real time.
- **Save outputs explicitly** - Use `--save` and `--output-directory` to organize prediction outputs.
- **Use JSON output for automation** - Add the `--json` flag when parsing outputs programmatically.
- **Open in web for debugging** - Add the `--web` flag to view predictions in the Replicate dashboard for detailed logs.
- **Chain predictions efficiently** - Use the `{{.output}}` syntax to pass outputs between models without intermediate saves.
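Several of these practices can be folded into one wrapper. A sketch assuming only the flags documented above; `run_model` and the timestamped directory layout are our conventions, and the actual `replicate run` call is commented out so the sketch is a dry run:

```shell
run_model() {
  model="$1"; shift
  outdir="./outputs/$(date +%Y%m%d-%H%M%S)"
  mkdir -p "$outdir"
  # Print the command that would run, applying --json, --save, and
  # --output-directory by default:
  echo "replicate run $model --json --save --output-directory $outdir $*"
  # replicate run "$model" --json --save --output-directory "$outdir" "$@"
}

run_model stability-ai/sdxl prompt="a corgi"
```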
## Troubleshooting
**Authentication errors:**

- Verify `REPLICATE_API_TOKEN` is set correctly
- Run `replicate account current` to test authentication

**Model not found:**

- Check the model name format: `owner/model-name`
- Verify the model exists at replicate.com

**Input validation errors:**

- Run `replicate model schema <model>` to see required inputs
- Check input types (string, number, file)

**File upload issues:**

- Ensure the `@` prefix is used for local files
- Verify the file path is correct and the file exists
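Some of these checks can be automated locally before any API call. A preflight sketch in plain shell; `check_inputs` is our illustrative helper, validating the `owner/model-name` format and any `key=@file` arguments:

```shell
check_inputs() {
  # Model name must look like owner/model-name.
  case "$1" in
    */*) ;;
    *) echo "error: model must be in owner/model-name format" >&2; return 1 ;;
  esac
  shift
  # Any @file inputs must point at existing local files.
  for arg in "$@"; do
    file="${arg#*=@}"
    if [ "$file" != "$arg" ] && [ ! -f "$file" ]; then
      echo "error: file not found: $file" >&2
      return 1
    fi
  done
}

# Usage:
#   check_inputs nightmareai/real-esrgan image=@photo.jpg && \
#     replicate run nightmareai/real-esrgan image=@photo.jpg
```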
## Additional Resources
- Replicate documentation: https://replicate.com/docs
- Model explorer: https://replicate.com/explore
- API reference: https://replicate.com/docs/reference/http
- GitHub repository: https://github.com/replicate/cli