aliyun-animate-anyone
Use when generating dance or motion-transfer videos with Alibaba Cloud Model Studio AnimateAnyone (`animate-anyone-gen2`) using a detected character image and an action template. Use when cloning motion from a dance/action video into a target character image.
Source: cinience/alicloud-skills
Category: provider

Install:

```bash
npx skill4agent add cinience/alicloud-skills aliyun-animate-anyone
```
Model Studio AnimateAnyone
Validation
```bash
mkdir -p output/aliyun-animate-anyone
python -m py_compile skills/ai/video/aliyun-animate-anyone/scripts/prepare_animate_anyone_request.py && echo "py_compile_ok" > output/aliyun-animate-anyone/validate.txt
```

Pass criteria: the command exits 0 and `output/aliyun-animate-anyone/validate.txt` is generated.

Output And Evidence
- Save normalized request payloads, detection outputs, template IDs, and task polling snapshots under `output/aliyun-animate-anyone/`.
- Record whether the result should keep the reference image background or the source video background.
Use AnimateAnyone when the task needs motion transfer from a template video rather than plain talking-head animation.
Critical model names
Use these exact model strings:
- `animate-anyone-detect-gen2`
- `animate-anyone-template-gen2`
- `animate-anyone-gen2`
Selection guidance:
- Run image detection first (`animate-anyone-detect-gen2`).
- Run template generation on the source motion video (`animate-anyone-template-gen2`).
- Use `animate-anyone-gen2` for the final video job.
Prerequisites
- China mainland (Beijing) region only.
- Set `DASHSCOPE_API_KEY` in your environment, or add `dashscope_api_key` to `~/.alibabacloud/credentials`.
- Input files must be public HTTP/HTTPS URLs.
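The two credential sources above can be checked in order before any request is prepared. Below is a minimal sketch of that lookup; the helper name `resolve_dashscope_api_key` is hypothetical, and the assumption that `~/.alibabacloud/credentials` is an INI-style file with a `dashscope_api_key` option is mine, not confirmed by this skill.

```python
import configparser
import os
from pathlib import Path
from typing import Optional


def resolve_dashscope_api_key() -> Optional[str]:
    """Return the API key from the environment, falling back to the
    ~/.alibabacloud/credentials file (assumed INI format)."""
    key = os.environ.get("DASHSCOPE_API_KEY")
    if key:
        return key
    cred_path = Path.home() / ".alibabacloud" / "credentials"
    if cred_path.is_file():
        parser = configparser.ConfigParser()
        parser.read(cred_path)
        # Scan every profile section for a dashscope_api_key entry.
        for section in parser.sections():
            if parser.has_option(section, "dashscope_api_key"):
                return parser.get(section, "dashscope_api_key")
    return None
```

The environment variable wins when both sources are present, so a shell export can override a stale credentials file.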
Normalized interface (video.animate_anyone)
Detect Request
- `model` (string, optional): default `animate-anyone-detect-gen2`
- `image_url` (string, required)

Template Request
- `model` (string, optional): default `animate-anyone-template-gen2`
- `video_url` (string, required)

Generate Request
- `model` (string, optional): default `animate-anyone-gen2`
- `image_url` (string, required)
- `template_id` (string, required)
- `use_ref_img_bg` (bool, optional): whether to keep the input image background
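The generate-request fields above can be assembled and validated locally before anything is sent. This is a minimal sketch, not the skill's actual script; `build_generate_request` is a hypothetical helper that enforces the required/optional split documented here.

```python
from typing import Any, Dict, Optional

GENERATE_MODEL = "animate-anyone-gen2"


def build_generate_request(
    image_url: str,
    template_id: str,
    use_ref_img_bg: Optional[bool] = None,
    model: str = GENERATE_MODEL,
) -> Dict[str, Any]:
    """Build a normalized generate-request payload for video.animate_anyone."""
    if not image_url:
        raise ValueError("image_url is required")
    if not template_id:
        raise ValueError("template_id is required")
    payload: Dict[str, Any] = {
        "model": model,
        "image_url": image_url,
        "template_id": template_id,
    }
    # Optional flag: only include it when the caller made a choice.
    if use_ref_img_bg is not None:
        payload["use_ref_img_bg"] = use_ref_img_bg
    return payload
```

Leaving `use_ref_img_bg` out of the payload when unset lets the service apply its own default rather than pinning one client-side.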
Response
- `task_id` (string)
- `task_status` (string)
- `video_url` (string, when finished)
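Since the response is an async task handle, the caller polls `task_status` until `video_url` appears. A sketch of that loop is below, with the status query injected as a function so it can wrap whatever HTTP client is in use; the terminal status strings are an assumption modeled on common DashScope task states, not confirmed by this skill.

```python
import time
from typing import Callable, Dict


def poll_task(
    fetch_status: Callable[[str], Dict[str, str]],
    task_id: str,
    interval_s: float = 5.0,
    max_polls: int = 60,
) -> Dict[str, str]:
    """Poll an async task until it reaches a terminal status.

    `fetch_status` is an injected callable (e.g. wrapping the real HTTP
    status query) returning a dict with at least `task_status`.
    """
    terminal = {"SUCCEEDED", "FAILED", "CANCELED"}  # assumed status names
    snapshot: Dict[str, str] = {}
    for _ in range(max_polls):
        snapshot = fetch_status(task_id)
        if snapshot.get("task_status") in terminal:
            return snapshot
        time.sleep(interval_s)
    raise TimeoutError(f"task {task_id} not finished after {max_polls} polls")
```

Saving each returned snapshot under `output/aliyun-animate-anyone/` satisfies the evidence requirement above.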
Quick start
```bash
python skills/ai/video/aliyun-animate-anyone/scripts/prepare_animate_anyone_request.py \
  --image-url "https://example.com/dancer.png" \
  --template-id "tmpl_xxx" \
  --use-ref-img-bg
```

Operational guidance
- The action template must come from the official template-generation API.
- Full-body images work best when `use_ref_img_bg=false`; half-body images are not recommended in that mode.
- This skill is best for dance or large body-motion transfer, not generic talking-head tasks.
Output location
- Default output: `output/aliyun-animate-anyone/request.json`
- Override the base directory with `OUTPUT_DIR`.
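The override behaves like a simple base-directory swap. This is a sketch of how the path might resolve, assuming `OUTPUT_DIR` replaces the leading `output` segment; the real script's resolution logic may differ.

```python
import os
from pathlib import Path
from typing import Mapping


def request_output_path(env: Mapping[str, str] = os.environ) -> Path:
    """Resolve where request.json is written: $OUTPUT_DIR (default
    "output") joined with the skill's subdirectory."""
    base = env.get("OUTPUT_DIR", "output")
    return Path(base) / "aliyun-animate-anyone" / "request.json"
```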
References
references/sources.md