# OpenStoryline Usage Skill
You are responsible for executing the actual video editing workflow of OpenStoryline, on the premise that it has already been fully installed.

OpenStoryline is an editing agent that lets users edit videos through natural-language conversations using their own materials. It has built-in features such as material search, content understanding, subtitle generation, and text-to-speech, and users can submit specific editing/modification requests over multiple rounds.

The goal is to use the existing scripts to reliably complete a closed loop from starting the services to producing videos, and to support continued conversations, secondary editing, and regenerating new videos with the same session_id.
## Scope
This skill only handles "usage and editing":
- Check and modify the necessary fields in `config.toml`.
- Start the MCP server.
- Start `uvicorn agent_fastapi:app`.
- Create a session and send editing requests.
- Wait for and verify the output video products.
- Continue conversations in the same session to perform secondary editing.
- Verify whether a new output video is generated after secondary editing.
It does not handle the full installation process (dependency installation, model download, resource download, etc.), which falls under the scope of the installation skill. If you hit problems during startup and suspect they are installation issues, refer to the installation skill `openstoryline-install`.
## Core Rules
- By default, listen only on `127.0.0.1`; do not expose the services to the local area network.
- Prioritize reusing existing scripts instead of reinventing the wheel:
  - Configuration modification script: `scripts/update_config.py` in the code repository.
  - Web service bridging script: `scripts/bridge_openstoryline.py` in the current skill directory. Locate the current skill directory first, then join it with `scripts/bridge_openstoryline.py`.
- Long-running services (MCP / Web) must be started as long-running processes and their logs monitored continuously; do not treat startup commands as one-off detection commands.
- Do not append wrappers to startup commands that truncate logs, exit early, or force-kill the process.
- Ask the user which materials they need to edit and their paths.
- Save the session_id returned when creating a session in the first round; subsequent conversations and secondary editing depend on it.
- If the server responds "The previous message has not been completed, please try again later", do not create a new session; wait first, and if necessary terminate only the stuck local bridge process, then retry with the original session_id.
- Do not terminate the MCP / Web services during task execution unless the user explicitly asks to stop them or the services are confirmed dead.
- After completing each task, clearly return the following to the user:
- The full path of the final video
- If secondary editing is performed, also indicate whether a new output file has been generated
- The example commands below use `source .venv/bin/activate` as an example; replace it with the correct activation command for the user's actual environment.
- If a port is occupied, prefer switching to another port.
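The port check can be sketched as a small helper before switching. `port_free` is a hypothetical name, 8005 is the Web port used in the examples below, and the check is a plain bind attempt via python3, not any OpenStoryline API:

```shell
# Return 0 if the given TCP port is free on 127.0.0.1 (bind succeeds),
# nonzero if it is occupied. Requires python3 on PATH.
port_free() {
  python3 -c "import socket, sys; s = socket.socket(); s.bind(('127.0.0.1', int(sys.argv[1])))" "$1" 2>/dev/null
}

if port_free 8005; then
  echo "port 8005 is free"
else
  echo "port 8005 is occupied; try another port such as 8006"
fi
```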
## OpenClaw Execution Strategy (Important)
If you are OpenClaw, pay attention to the following key points:
### How to run long-running services
For the following two commands:
- `PYTHONPATH=src python -m open_storyline.mcp.server`
- `uvicorn agent_fastapi:app --host 127.0.0.1 --port 8005`

They must be handled as long-running processes:
- Start them as background long-running processes, enabling PTY if the tool supports it.
- Do not judge failure immediately after startup; MCP Server startup may take several minutes.
- Continuously observe the returned log output; do not rush to kill the process.
- Proceed to the next step only after seeing the success log.
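The waiting discipline above can be sketched as a polling helper, assuming you redirect the long-running process's output to a log file; `wait_for_log` and the log path are illustrative, not part of OpenStoryline:

```shell
# Poll a log file until a success marker appears, with a timeout in seconds.
# Returns 0 once the marker is found, 1 on timeout.
wait_for_log() {
  logfile="$1"; marker="$2"; timeout="${3:-300}"
  elapsed=0
  while [ "$elapsed" -lt "$timeout" ]; do
    if grep -q "$marker" "$logfile" 2>/dev/null; then
      echo "marker found after ${elapsed}s"
      return 0
    fi
    sleep 1
    elapsed=$((elapsed + 1))
  done
  echo "timed out waiting for: $marker" >&2
  return 1
}
```

For example, `wait_for_log mcp.log "Uvicorn running on http://127.0.0.1:8001" 600` waits up to ten minutes for the MCP success line instead of killing the process early.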
### How to run one-time commands
The following operations are suitable for ordinary one-time commands:
- Modify `config.toml`
- Create a session
- Continue conversations in an existing session
- Search for output files
- Check file size
### Which log is most useful
In practice, the Web service logs are the most suitable for tracking editing progress; the normal process nodes appear there as the pipeline advances.
If the bridge script is still waiting, that does not mean the system is stuck; the server may simply still be processing.
## Standard Workflow (OpenClaw)
### 0) Confirm the repository root directory
`<repo-root>` in subsequent commands refers to the root directory of the OpenStoryline repository, for example:
```bash
/Users/yourname/Desktop/code/Openstoryline/FireRed-Openstoryline
```
All commands are executed in this directory by default, with the environment activated first.
### 1) Enter the project root directory and configure `config.toml`
#### Required configuration
Before starting editing, the following 6 fields must have values, otherwise model calls will fail. First ask the user for the specific values of these fields, then modify them using the script.
Directly usable commands (run in the repository root directory, taking `.venv` as an example):
```bash
cd <repo-root> && source .venv/bin/activate && python scripts/update_config.py --config ./config.toml --set llm.model=REPLACE_WITH_REAL_MODEL
cd <repo-root> && source .venv/bin/activate && python scripts/update_config.py --config ./config.toml --set llm.base_url=REPLACE_WITH_REAL_URL
cd <repo-root> && source .venv/bin/activate && python scripts/update_config.py --config ./config.toml --set llm.api_key=sk-REPLACE_WITH_REAL_KEY
cd <repo-root> && source .venv/bin/activate && python scripts/update_config.py --config ./config.toml --set vlm.model=REPLACE_WITH_REAL_MODEL
cd <repo-root> && source .venv/bin/activate && python scripts/update_config.py --config ./config.toml --set vlm.base_url=REPLACE_WITH_REAL_URL
cd <repo-root> && source .venv/bin/activate && python scripts/update_config.py --config ./config.toml --set vlm.api_key=sk-REPLACE_WITH_REAL_KEY
```
#### Optional configuration
The following are common optional settings, which can be configured as needed:
1. MCP port (in case of port conflict)
```bash
cd <repo-root> && source .venv/bin/activate && python scripts/update_config.py --config ./config.toml --set local_mcp_server.port=8002
```
2. Material retrieval (Pexels)
```bash
cd <repo-root> && source .venv/bin/activate && python scripts/update_config.py --config ./config.toml --set search_media.pexels_api_key=REPLACE_WITH_PEXELS_KEY
```
3. TTS (if voiceover is needed)
You only need to fill in one of the following three providers:
```bash
# minimax
cd <repo-root> && source .venv/bin/activate && python scripts/update_config.py --config ./config.toml --set generate_voiceover.providers.minimax.base_url=https://api.minimax.chat/v1/t2a_v2
cd <repo-root> && source .venv/bin/activate && python scripts/update_config.py --config ./config.toml --set generate_voiceover.providers.minimax.api_key=REPLACE_WITH_MINIMAX_KEY
# bytedance
cd <repo-root> && source .venv/bin/activate && python scripts/update_config.py --config ./config.toml --set generate_voiceover.providers.bytedance.uid=REPLACE_UID
cd <repo-root> && source .venv/bin/activate && python scripts/update_config.py --config ./config.toml --set generate_voiceover.providers.bytedance.appid=REPLACE_APPID
cd <repo-root> && source .venv/bin/activate && python scripts/update_config.py --config ./config.toml --set generate_voiceover.providers.bytedance.access_token=REPLACE_ACCESS_TOKEN
# 302
cd <repo-root> && source .venv/bin/activate && python scripts/update_config.py --config ./config.toml --set generate_voiceover.providers.302.base_url=https://REPLACE_BASE_URL
cd <repo-root> && source .venv/bin/activate && python scripts/update_config.py --config ./config.toml --set generate_voiceover.providers.302.api_key=REPLACE_API_KEY
```
### 2) Start the MCP Server
Note that starting the MCP Server may take several minutes; be patient and do not rush to kill the process.
macOS/Linux:
```bash
cd <repo-root> && source .venv/bin/activate && PYTHONPATH=src python -m open_storyline.mcp.server
```
Windows:
```powershell
cd <repo-root>
. .venv\Scripts\Activate.ps1
$env:PYTHONPATH="src"
python -m open_storyline.mcp.server
```
Startup is considered successful when you see logs similar to the following:
```text
Uvicorn running on http://127.0.0.1:8001
```
### 3) Start the Web service (uvicorn)
macOS/Linux:
```bash
cd <repo-root> && source .venv/bin/activate && uvicorn agent_fastapi:app --host 127.0.0.1 --port 8005
```
Startup is successful when the following logs appear:
```text
INFO: Started server process [PID]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://127.0.0.1:8005 (Press CTRL+C to quit)
```
### 4) Create an editing session
Replace the address with the actual address of the Web service started in the previous step.
```bash
curl -s -X POST "http://127.0.0.1:8005/api/sessions"
```
This step creates an editing session and returns a session_id, which is very important: as long as the Web service is running, you can use this session_id to upload materials and hold multi-round conversations, so be sure to keep it.
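A minimal sketch for capturing the session_id from the response, assuming the JSON field is named `session_id` (inspect the actual server response if extraction comes back empty); `extract_session_id` is an illustrative helper, not part of OpenStoryline:

```shell
# Read a JSON document from stdin and print its session_id field
# (empty output if the field is absent). Requires python3 on PATH.
extract_session_id() {
  python3 -c "import json, sys; print(json.load(sys.stdin).get('session_id', ''))"
}

# Hypothetical usage against the endpoint above:
# resp=$(curl -s -X POST "http://127.0.0.1:8005/api/sessions")
# session_id=$(printf '%s' "$resp" | extract_session_id)
# echo "$session_id"
```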
### 5) Upload materials
Option 1: upload via the API:
```bash
curl -s -X POST "http://127.0.0.1:8005/api/sessions/{session_id}/media" -F "files=@/absolute/path/input.mp4"
```
Option 2: for large files, it is recommended to copy them locally instead:
```bash
cp path/to/source.mp4 <repo-root>/outputs/{session_id}/media
```
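The local-copy option can be wrapped in a small helper that creates the media directory first and verifies the copy byte-for-byte; `copy_media` is an illustrative name and both paths are placeholders to substitute:

```shell
# Copy a source file into a destination directory, creating the directory
# if needed, then verify the copy is byte-identical to the source.
copy_media() {
  src="$1"; dest_dir="$2"
  mkdir -p "$dest_dir"
  cp "$src" "$dest_dir/" || return 1
  cmp -s "$src" "$dest_dir/$(basename "$src")"
}

# Hypothetical usage:
# copy_media path/to/source.mp4 "<repo-root>/outputs/{session_id}/media"
```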
### 6) Start the editing conversation (session is created automatically)
Use the bridge script built into the skill.
- skills-root: the directory where the current skill is located.
- session-id: fill in the session_id obtained in the previous step.
- base-url: fill in the URL of the Web service.
- prompt: the user's editing requirements.
- lang: the language the user writes in; currently only zh / en are supported. Set it once.
```bash
cd <repo-root> && source .venv/bin/activate && python <skills-root>/scripts/bridge_openstoryline.py \
  --session-id <session_id> \
  --base-url http://127.0.0.1:8005 \
  --prompt "剪一个小红书风格视频" \
  --lang "zh"
# prompt means: "Cut a Xiaohongshu-style video"
```
### 7) Wait for and observe editing progress
Sometimes the editing agent will first ask for editing requirements; sometimes it will start editing directly. Editing may take several minutes, especially when it involves copywriting, voiceover, and rendering.
#### Correct approach
- Continuously monitor the process session corresponding to the current bridge script.
- At the same time, check the Web service logs; the editing agent updates its progress there in real time.
- As long as the Web service logs are still progressing, keep waiting and do not restart the services casually.
#### Practical experience
If the bridge command has not returned yet but nodes are already running in the Web service logs, the server is usually still working normally; do not misjudge this as a failure.
### 8) Second round: continue chatting in the same session
Based on the assistant's reply, continue sending editing requests. For example, if the assistant has made an editing plan and requests confirmation:
```bash
cd <repo-root> && source .venv/bin/activate && python <skills-root>/scripts/bridge_openstoryline.py \
  --base-url http://127.0.0.1:8005 \
  --session-id <previous session_id> \
  --prompt "开始剪辑"
# prompt means: "Start editing"
```
Or if adjustments are needed:
```bash
cd <repo-root> && source .venv/bin/activate && python <skills-root>/scripts/bridge_openstoryline.py \
  --base-url http://127.0.0.1:8005 \
  --session-id <previous session_id> \
  --prompt "使用欢快的BGM"
# prompt means: "Use cheerful BGM"
```
### 9) Check the first-round output product
Generally, the editing agent's reply will state the path of the output product directly.
If not, check:
```bash
cd <repo-root> && find .storyline/.server_cache/<session_id> -name "output_*.mp4" 2>/dev/null
```
#### Judgment standard
Editing is considered successful if an output_*.mp4 file exists.
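The judgment standard can be sketched as a helper that picks the most recent product and checks it is non-empty; `newest_output` is an illustrative name, and the cache path follows the `find` command above:

```shell
# Print the most recently modified output_*.mp4 in the given directory,
# or nothing if no product exists yet.
newest_output() {
  ls -t "$1"/output_*.mp4 2>/dev/null | head -n 1
}

# Hypothetical usage with the session cache path from this step:
# latest=$(newest_output ".storyline/.server_cache/<session_id>")
# [ -n "$latest" ] && [ -s "$latest" ] && echo "editing succeeded: $latest"
```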
### 10) Send the video
Send the generated video to the user for viewing and ask for their feedback.
#### Guide for sending videos in the OpenClaw + Feishu APP scenario
If you are OpenClaw and the user is using the Feishu mobile APP, follow this guide. Requirements:
- Python 3.6+
- requests has been installed:
```bash
python3 -m pip install requests
```
- OpenClaw has the Feishu channel configured.
The script automatically reads Feishu credentials from ~/.openclaw/openclaw.json.
Choosing receive-id:
- oc_xxx -> chat_id: send to a group chat or the current one-on-one session (highly recommended)
- ou_xxx -> open_id: send to a specified user
- on_xxx -> user_id: use only when you clearly have a user_id
```bash
cd <repo-root> && source .venv/bin/activate && python <skills-root>/scripts/feishu_file_sender.py --help
cd <repo-root> && source .venv/bin/activate && python <skills-root>/scripts/feishu_file_sender.py --file /absolute/path/to/video.mp4 --receive-id-type chat_id --receive-id oc_xxx
```
### 11) Secondary editing
If the user is not satisfied with the generated video, you can reuse the same session_id to continue modifying the video and generate a new output product.
Example: modify the copywriting style:
```bash
cd <repo-root> && source .venv/bin/activate && python <skills-root>/scripts/bridge_openstoryline.py \
  --base-url http://127.0.0.1:8005 \
  --session-id <session_id> \
  --prompt "帮我把文案换成更欢乐、更有活力的风格"
# prompt means: "Change the copy to a more cheerful, energetic style"
```
- Under the same session_id, the system will re-run the corresponding nodes and then re-render.
- A new output directory will appear under the same session cache path.
- A new output_*.mp4 will be generated in that directory.
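One way to verify that secondary editing produced a new product is to snapshot the output listing before the second round and diff it afterwards; `list_outputs` / `detect_new_outputs` are illustrative helpers, not OpenStoryline APIs:

```shell
# List all output products under a session cache directory, sorted
# so the listing can be diffed with comm.
list_outputs() {
  find "$1" -name "output_*.mp4" 2>/dev/null | sort
}

# Print only the products that appeared since the pre-edit snapshot.
# $1: file holding the pre-edit listing; $2: session cache directory.
detect_new_outputs() {
  after_list=$(mktemp)
  list_outputs "$2" > "$after_list"
  comm -13 "$1" "$after_list"
  rm -f "$after_list"
}

# Hypothetical usage around the secondary-editing round:
# before=$(mktemp); list_outputs ".storyline/.server_cache/<session_id>" > "$before"
# ... run the secondary-editing bridge command and wait for it to finish ...
# detect_new_outputs "$before" ".storyline/.server_cache/<session_id>"
```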
## Security Note
Only change the listen address to 0.0.0.0 when the user explicitly requests access from a mobile phone / the LAN.
Also remind the user: use this only on a trusted network, and avoid exposing the service to the public internet.