talk-normal

Skill by ara.so — Daily 2026 Skills collection.
talk-normal is a system prompt (plus a shell-script helper) that strips AI slop — bullet-point padding, hollow affirmations, corporate filler — from any LLM while preserving all useful information. Tested at ~73% character reduction on GPT-4o-mini and GPT-5.4 with no information loss.

How it works

The project is a single `prompt.md` file (the system prompt) plus optional shell helpers. You copy the prompt text into the "System" field of any LLM interface or API call.

```
repo layout
├── prompt.md          ← the system prompt (main artifact)
├── CHANGELOG.md       ← rule history
├── CONTRIBUTING.md    ← how to add rules
└── TEST_RESULTS.md    ← before/after comparisons
```
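Every API example below reduces to the same two-message shape: read `prompt.md`, put its text in the `system` slot, then append the user turn. A minimal sketch, with a placeholder string standing in for the real file contents:

```python
# Placeholder for the real prompt.md text; in practice:
#   from pathlib import Path
#   system_prompt = Path("prompt.md").read_text()
system_prompt = "Answer plainly. No filler, no hollow affirmations."

# The shape every integration in this README boils down to
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "What is Python?"},
]
print(messages[0]["role"])  # → system
```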


Installation

1 — Clone the repo

```bash
git clone https://github.com/hexiecs/talk-normal.git
cd talk-normal
```

2 — Read the prompt


```bash
cat prompt.md
```

3 — Copy into your tool


Paste the contents of `prompt.md` into:

- ChatGPT → Settings → Customize ChatGPT → Custom Instructions → "How should ChatGPT respond?"
- Claude.ai → Project Instructions
- Cursor / Windsurf → `.cursorrules` or global AI rules
- API calls → the `system` parameter (see examples below)


Using the prompt via API

OpenAI (Python)

```python
import os
from pathlib import Path
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

system_prompt = Path("prompt.md").read_text()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "What is Python?"},
    ],
)
print(response.choices[0].message.content)
```

OpenAI (curl)


```bash
SYSTEM=$(jq -Rs . prompt.md)

curl https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d "{
    \"model\": \"gpt-4o-mini\",
    \"messages\": [
      {\"role\": \"system\", \"content\": $SYSTEM},
      {\"role\": \"user\", \"content\": \"What is Python?\"}
    ]
  }"
```

Anthropic Claude (Python)


```python
import os
from pathlib import Path
import anthropic

client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])

system_prompt = Path("prompt.md").read_text()

message = client.messages.create(
    model="claude-opus-4-5",
    max_tokens=1024,
    system=system_prompt,
    messages=[{"role": "user", "content": "Explain Docker in one paragraph."}],
)
print(message.content[0].text)
```

Google Gemini (Python)


```python
import os
from pathlib import Path
import google.generativeai as genai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])

system_prompt = Path("prompt.md").read_text()

model = genai.GenerativeModel(
    model_name="gemini-1.5-flash",
    system_instruction=system_prompt,
)

response = model.generate_content("What is a neural network?")
print(response.text)
```

Ollama (local models)


```bash
# Bake the prompt into a local model variant via a Modelfile
printf 'FROM llama3\nSYSTEM """%s"""\n' "$(cat prompt.md)" > Modelfile
ollama create talk-normal -f Modelfile
ollama run talk-normal "What is a REST API?"
```

Or via the Ollama Python SDK (the `ollama` pip package):

```python
from pathlib import Path

import ollama

system_prompt = Path("prompt.md").read_text()

response = ollama.chat(
    model="llama3",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "What is a REST API?"},
    ],
)
print(response["message"]["content"])
```


Shell helper: one-liner wrapper


A reusable shell function that injects the prompt automatically. Add to `~/.bashrc` or `~/.zshrc`:

```bash
export TALK_NORMAL_PROMPT="$HOME/talk-normal/prompt.md"

asknormal() {
  local question="$*"
  local system
  system=$(cat "$TALK_NORMAL_PROMPT")
  curl -s https://api.openai.com/v1/chat/completions \
    -H "Authorization: Bearer $OPENAI_API_KEY" \
    -H "Content-Type: application/json" \
    -d "$(jq -n \
      --arg sys "$system" \
      --arg q "$question" \
      '{model:"gpt-4o-mini",messages:[{role:"system",content:$sys},{role:"user",content:$q}]}')" \
    | jq -r '.choices[0].message.content'
}
```

Usage:

```bash
source ~/.bashrc
asknormal "What is the CAP theorem?"
```


Embedding in a project's AI config


Cursor (`.cursorrules`)

```bash
# Prepend talk-normal to your existing rules
cat talk-normal/prompt.md > .cursorrules
echo "" >> .cursorrules
echo "# Project-specific rules below" >> .cursorrules
cat your-existing-rules.md >> .cursorrules
```

OpenAI Assistants API


```python
import os
from pathlib import Path
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
system_prompt = Path("talk-normal/prompt.md").read_text()

assistant = client.beta.assistants.create(
    name="Normal Assistant",
    instructions=system_prompt,
    model="gpt-4o-mini",
)
print(f"Assistant ID: {assistant.id}")
```


Combining with your own system prompt


talk-normal rules are additive: prepend them before your domain instructions.

```python
from pathlib import Path

talk_normal = Path("talk-normal/prompt.md").read_text()

your_rules = """
You are a senior backend engineer. Answer questions about Python, Go, and distributed systems.
"""

combined_system = f"{talk_normal}\n\n---\n\n{your_rules}"
```


Common patterns

Pattern 1: Measure verbosity reduction

```python
def verbosity_ratio(before: str, after: str) -> float:
    """Return the fraction of the original length kept (lower = more concise)."""
    return len(after) / len(before)

before = "Python is a high-level, interpreted programming language known for its readability..."  # full answer: 1583 chars
after  = "Python is a high-level, interpreted language known for readability..."                  # full answer:  513 chars
print(f"{verbosity_ratio(before, after):.0%} of original length")  # 32% with the full 1583/513-char texts
```

Pattern 2: A/B test with and without the prompt


```python
import os
from pathlib import Path
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
system_prompt = Path("talk-normal/prompt.md").read_text()

question = "What is Kubernetes?"

def ask(system: str | None, user: str) -> str:
    messages = []
    if system:
        messages.append({"role": "system", "content": system})
    messages.append({"role": "user", "content": user})
    resp = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return resp.choices[0].message.content

without = ask(None, question)
with_prompt = ask(system_prompt, question)

print(f"Without: {len(without)} chars")
print(f"With:    {len(with_prompt)} chars")
print(f"Reduction: {(1 - len(with_prompt)/len(without)):.0%}")
```
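A single question is a noisy measurement; averaging the reduction over several A/B runs gives a steadier number. A sketch using hypothetical character counts (the 1583/513 pair echoes the Pattern 1 figures; the other pairs are invented):

```python
def mean_reduction(pairs: list[tuple[int, int]]) -> float:
    """Average reduction across (without_prompt_len, with_prompt_len) pairs."""
    return sum(1 - with_len / without_len for without_len, with_len in pairs) / len(pairs)

# Hypothetical character counts from three A/B runs
runs = [(1583, 513), (1200, 340), (980, 260)]
print(f"Mean reduction: {mean_reduction(runs):.0%}")  # → Mean reduction: 71%
```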

Pattern 3: Keep the prompt up to date


```bash
# Pull latest rules from upstream
cd talk-normal
git pull origin main

# Check what changed
git log --oneline -10
head -50 CHANGELOG.md
```

---

Contributing a new rule


1. Fork the repo and create a branch: `git checkout -b rule/no-em-dashes`
2. Edit `prompt.md` and add your rule in plain imperative English
3. Add an entry to `CHANGELOG.md`
4. Open an Issue or PR describing what slop the rule targets, with a before/after example

Quick before/after test for your new rule

```bash
asknormal "Test question"   # picks up your edited prompt.md via $TALK_NORMAL_PROMPT
```

---

Troubleshooting


| Symptom | Fix |
| --- | --- |
| Model still uses bullet points | Ensure the system prompt is in the `system` role, not prepended to the `user` message |
| Prompt too long for context window | Use a smaller model or trim older messages; `prompt.md` is intentionally compact |
| Ollama ignores the system prompt | Some quantized models have weak instruction-following; try `mistral` or `llama3` |
| Rules conflict with your own system prompt | Put the talk-normal rules first; add an `# Override:` comment before conflicting rules |
| Response is too terse / lost information | The prompt reduces filler, not facts; file an issue with a reproduction case |
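The `# Override:` marker mentioned under Troubleshooting is a plain-text convention inside the combined prompt, not something talk-normal parses. A hypothetical sketch of how a conflicting project rule might be marked (both rule strings are invented for illustration):

```python
# Invented rule texts, standing in for prompt.md and your project rules
talk_normal_rule = "Do not use bullet points unless the user asks for a list."
project_rule = (
    "# Override: release notes in this project are always bulleted\n"
    "When asked for release notes, output a bulleted list."
)

# talk-normal first, the marked exception after it
combined = f"{talk_normal_rule}\n\n{project_rule}"
print(combined)
```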


Key facts for agents


- Primary artifact: `prompt.md`. Copy its text verbatim as the `system` message.
- No code to run: this is a prompt, not a library; no `pip install`, no build step.
- Model-agnostic: works with GPT, Claude, Gemini, LLaMA, Mistral, etc.
- Tested reduction: ~72–73% fewer characters with zero information loss on a 10-question benchmark.
- License: MIT; use freely in commercial products.