ai-vendor-evaluation


AI Vendor Evaluation

AI供应商评估

Framework: Venkatesan, R. and Lecinski, J. (2026) The AI Marketing Canvas, 2nd ed. Stanford Business Books.
Position in the Canvas: Step 2 — Experimentation. Use this skill to select which tools enter the experiment stage. Use meta-ai-tools-audit as the reference catalogue when the client does not yet have a shortlist. Use playbook-ai-automation-workflow once a tool has been selected and an automation build is underway.

<!-- dual-compat:start -->
框架: Venkatesan, R. 和 Lecinski, J.(2026)《AI营销画布》,第二版,斯坦福商业图书。
画布中的位置: 第2阶段——实验阶段。使用此技能来选择进入实验阶段的工具。当客户尚未有候选清单时,将 meta-ai-tools-audit 作为参考目录。当工具选定并开始构建自动化流程时,使用 playbook-ai-automation-workflow。

<!-- dual-compat:start -->

Use when

适用场景

  • Structured 8-factor vendor evaluation framework for AI marketing tools, based on Venkatesan & Lecinski's The AI Marketing Canvas (2nd ed., Stanford Business Books, 2026). Scores each tool against EA market accessibility, data requirements, integration compatibility, team capability, and total cost in UGX, then produces a shortlist with 30-day experiment briefs. Invoke when a client has completed the ai-readiness-diagnostic, is at Canvas Step 2 (Experimentation), and is ready to select specific AI tools for structured trials. Also invoke when a client wants to compare 2–4 named tools before purchasing or committing budget.
  • Use this skill when it is the closest match to the requested deliverable or workflow.
  • 基于Venkatesan & Lecinski所著《AI营销画布》(第二版,斯坦福商业图书,2026年)的AI营销工具结构化8维度供应商评估框架。该框架针对东非(EA)市场可及性、数据要求、集成兼容性、团队能力以及以乌干达先令(UGX)计算的总成本对每个工具进行评分,随后生成包含30天实验简报的候选工具清单。当客户完成ai-readiness-diagnostic并处于画布第2阶段(实验阶段),准备选择特定AI工具进行结构化试验时,可调用此框架。此外,当客户在采购或确定预算前想要比较2-4个指定工具时,也可调用该框架。
  • 当此技能与请求的交付成果或工作流程最匹配时使用。

Do not use when

不适用场景

  • Do not use this skill for graphic design, video production, software development, or legal advice beyond the repository's stated scope.
  • Do not use it when another skill in this repository is clearly more specific to the requested deliverable.
  • 请勿将此技能用于图形设计、视频制作、软件开发或超出知识库规定范围的法律咨询。
  • 当知识库中另有技能明显更符合请求的交付成果时,请勿使用此技能。

Workflow

工作流程

  1. Collect the required inputs or source material before drafting, unless this skill explicitly generates the intake itself.
  2. Follow the section order and decision rules in this SKILL.md; do not skip mandatory steps or required fields.
  3. Review the draft against the quality criteria, then deliver the final output in markdown unless the skill specifies another format.
  1. 在起草前收集所需输入或源材料,除非此技能明确说明可自行生成输入内容。
  2. 遵循本 SKILL.md 中的章节顺序和决策规则;请勿跳过必填步骤或必填字段。
  3. 根据质量标准审核草稿,然后以markdown格式交付最终输出,除非技能指定其他格式。

Anti-Patterns

反模式

  • Do not invent client facts, performance data, budgets, or approvals that were not provided or clearly inferred from evidence.
  • Do not skip required inputs, mandatory sections, or quality checks just to make the output shorter.
  • Do not drift into out-of-scope work such as code implementation, design production, or unsupported legal conclusions.
  • 请勿编造未提供或无法从证据中明确推断的客户事实、绩效数据、预算或审批信息。
  • 请勿为了缩短输出内容而跳过必填输入、必填章节或质量检查。
  • 请勿偏离范围开展工作,例如代码实现、设计制作或无依据的法律结论。

Outputs

输出成果

  • An AI-focused strategy, audit, system design, or prompt asset in markdown with human review and control points.
  • 经过人工审核和管控的AI相关策略、审计报告、系统设计或提示资产,以markdown格式呈现。

References

参考资料

  • Use the inline instructions in this skill now. If a references/ directory is added later, treat its files as the deeper source material and keep this SKILL.md execution-focused.
<!-- dual-compat:end -->
  • 目前使用本技能中的内嵌说明。若后续添加 references/ 目录,将其文件作为深层源材料,同时保持本 SKILL.md 的执行导向。
<!-- dual-compat:end -->

Required Input

必填输入

Ask for all of the following before generating any output:
  1. Client business name — exact trading name
  2. Industry — sector and sub-sector (e.g. retail > fashion, NGO > health)
  3. Country / city — default is Uganda; note if outside EA
  4. Current Canvas step — confirm from ai-readiness-diagnostic output
  5. Specific marketing problem to solve — a named task, not "we want AI" (e.g. "write 20 Instagram captions per month", "qualify leads from our Facebook ads", "send WhatsApp order confirmations automatically")
  6. Tools being evaluated — names of up to 4 tools the client is considering; if the client has no shortlist, prompt them to run meta-ai-tools-audit first
  7. Current tech stack — list what the client already uses: CRM, email platform, social scheduler, payment platform, website CMS, WhatsApp setup
  8. Monthly tool budget in UGX — if unknown, ask for a range
  9. Team size and technical level — number of people who will use the tool and their comfort level (non-technical / basic digital skills / comfortable with no-code tools / has developer support)
Do not proceed until all nine inputs are confirmed.

在生成任何输出前,请收集以下所有信息:
  1. 客户企业名称 —— 准确的交易名称
  2. 行业 —— 行业及细分领域(例如:零售>时尚、非政府组织>健康)
  3. 国家/城市 —— 默认是乌干达;若在东非(EA)以外地区请注明
  4. 当前画布阶段 —— 根据 ai-readiness-diagnostic 的输出确认
  5. 需解决的具体营销问题 —— 明确的任务,而非“我们想要AI”(例如:“每月撰写20条Instagram文案”、“筛选Facebook广告的潜在客户”、“自动发送WhatsApp订单确认信息”)
  6. 待评估工具 —— 客户考虑的最多4个工具名称;若客户暂无候选清单,请提示他们先运行 meta-ai-tools-audit
  7. 当前技术栈 —— 列出客户已使用的工具:CRM、邮件平台、社交调度工具、支付平台、网站CMS、WhatsApp设置
  8. 每月工具预算(UGX) —— 若未知,请询问预算范围
  9. 团队规模与技术水平 —— 将使用该工具的人数及其技术熟练度(非技术人员/基础数字技能/熟悉无代码工具/有开发人员支持)
在确认所有9项输入前,请勿继续推进。
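The nine-input gate above can be sketched as a simple intake check. This is an illustrative sketch only: the field names below are hypothetical, not part of the framework.

```python
# Hypothetical field names for the nine required inputs.
REQUIRED_INPUTS = [
    "client_business_name",  # 1. exact trading name
    "industry",              # 2. sector and sub-sector
    "country_city",          # 3. default Uganda; note if outside EA
    "canvas_step",           # 4. from ai-readiness-diagnostic output
    "marketing_problem",     # 5. a named task, not "we want AI"
    "tools_evaluated",       # 6. up to 4 tool names
    "tech_stack",            # 7. CRM, email, scheduler, payments, CMS, WhatsApp
    "monthly_budget_ugx",    # 8. amount or range
    "team_capability",       # 9. team size and technical level
]

def missing_inputs(intake: dict) -> list:
    """Return required inputs that are absent or empty; proceed only when empty."""
    return [field for field in REQUIRED_INPUTS if not intake.get(field)]
```

Generation proceeds only when `missing_inputs` returns an empty list; otherwise, ask for each item it names.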

Evaluation Framework — 8 Factors

评估框架——8个维度

Apply all 8 factors to every tool. Do not produce partial scorecards. Score each factor 1–5 using the criteria below. Sum to a total out of 40.

对每个工具应用全部8个维度进行评估。请勿生成部分评分卡。 根据以下标准为每个维度评分1-5分,总分满分为40分。

Factor 1 — Use Case Fit

维度1——用例适配度

Does the tool address the specific, named marketing problem the client stated?
| Score | Criterion |
|-------|-----------|
| 5 | Purpose-built for this exact task; vendor's primary use case matches client's problem |
| 4 | Strong fit; tool does this well even if it does other things too |
| 3 | Adequate fit; the feature exists but is not the tool's core strength |
| 2 | Marginal fit; requires significant configuration to address the use case |
| 1 | Generic/broad; vendor claims the tool "does everything" — lack of focus signals lack of depth |
Red flag: any vendor positioning the tool as an all-in-one AI platform without a primary specialisation. Record this explicitly in the scorecard.

该工具是否能解决客户提出的特定营销问题?
| 评分 | 标准 |
|------|------|
| 5 | 专为该任务定制;供应商的核心用例与客户问题完全匹配 |
| 4 | 适配性强;即使工具具备其他功能,完成该任务的表现依然出色 |
| 3 | 适配性尚可;具备相关功能但并非工具的核心优势 |
| 2 | 适配性薄弱;需要大量配置才能满足用例需求 |
| 1 | 通用/宽泛;供应商宣称工具“无所不能”——缺乏针对性意味着功能不够深入 |
红色预警:任何将工具定位为全功能AI平台但无核心专长的供应商。请在评分卡中明确记录此情况。

Factor 2 — Data Requirements

维度2——数据要求

What data does the tool need to function, and does the client have it?
| Score | Criterion |
|-------|-----------|
| 5 | Works entirely with data the client already holds and controls |
| 4 | Requires one additional data source the client can readily obtain |
| 3 | Requires moderate data preparation; client has the data but it is not structured |
| 2 | Requires data the client does not currently have |
| 1 | Requires data the client cannot legally collect |
Flag any data requirement that may conflict with the Uganda Data Protection and Privacy Act 2019 (PDPA 2019). In particular: personal data collection, third-party data sharing, cross-border data transfer, and automated profiling of individuals. Record the flag in the scorecard even if the score is high.

该工具运行需要哪些数据,客户是否拥有这些数据?
| 评分 | 标准 |
|------|------|
| 5 | 完全使用客户已持有并可控的数据运行 |
| 4 | 需要一个客户可轻松获取的额外数据源 |
| 3 | 需要适度的数据准备;客户拥有数据但未结构化 |
| 2 | 需要客户当前未持有的数据 |
| 1 | 需要客户无法合法收集的数据 |
标记任何可能与《2019年乌干达数据保护与隐私法案》(PDPA 2019)冲突的数据要求。特别注意:个人数据收集、第三方数据共享、跨境数据传输以及对个人的自动化分析。即使评分较高,也请在评分卡中记录该预警。

Factor 3 — Integration Compatibility

维度3——集成兼容性

Does the tool connect to what the client already uses?
| Score | Criterion |
|-------|-----------|
| 5 | Native connectors to 2+ tools in the client's current stack; no developer work needed |
| 4 | Zapier or Make connector available; straightforward to link to existing tools |
| 3 | API available; requires some technical setup but no full developer resource |
| 2 | Limited integration; one connector available but not for client's key tools |
| 1 | Requires replacing existing tools or a full technical implementation |
For any use case involving WhatsApp or SMS, check for Africa's Talking integration and note the result explicitly. Africa's Talking is the default recommendation for EA WhatsApp/SMS automation.

该工具能否与客户已使用的工具对接?
| 评分 | 标准 |
|------|------|
| 5 | 与客户当前技术栈中的2个以上工具原生对接;无需开发工作 |
| 4 | 提供Zapier或Make连接器;可直接与现有工具链接 |
| 3 | 提供API;需要一些技术设置但无需全职开发资源 |
| 2 | 集成能力有限;仅提供一个连接器但不适用于客户的核心工具 |
| 1 | 需要替换现有工具或进行完整的技术实施 |
对于涉及WhatsApp或SMS的任何用例,请检查是否支持Africa's Talking集成并明确记录结果。Africa's Talking是东非(EA)地区WhatsApp/SMS自动化的默认推荐工具。

Factor 4 — EA Market Accessibility

维度4——东非(EA)市场可及性

Can the client actually buy, trial, and use this tool from Uganda or East Africa?
| Score | Criterion |
|-------|-----------|
| 5 | Free tier adequate for Step 2 experimentation; no payment required |
| 4 | Affordable paid tier accessible via USD card, MTN MoMo, or Airtel Money |
| 3 | Paid tool; USD card required; price is reasonable once converted to UGX |
| 2 | Expensive or requires payment method unavailable to most EA clients |
| 1 | No EA payment method accepted and no adequate free tier |
Always convert the pricing to UGX using the current approximate rate and state it explicitly. Note EAT (UTC+3) customer support availability if known — not a scoring criterion but record it as context.

客户能否在乌干达或东非(EA)地区实际购买、试用和使用该工具?
| 评分 | 标准 |
|------|------|
| 5 | 免费版足以支持第2阶段实验;无需付费 |
| 4 | 可负担的付费版支持通过USD卡、MTN MoMo或Airtel Money支付 |
| 3 | 付费工具;需要USD卡;换算为UGX后价格合理 |
| 2 | 价格昂贵或需要大多数东非(EA)客户无法使用的支付方式 |
| 1 | 不接受任何东非(EA)支付方式且无合适的免费版 |
请始终使用当前近似汇率将价格换算为UGX并明确说明。若已知EAT(UTC+3)时区的客户支持可用性,请记录为附加信息(非评分标准)。

Factor 5 — Team Capability Match

维度5——团队能力匹配度

Can the client's team use this without specialist skills or paid training?
| Score | Criterion |
|-------|-----------|
| 5 | No-code; self-onboarding in under 1 hour; free tutorials available |
| 4 | Minimal onboarding; free documentation or YouTube training adequate |
| 3 | Some training required; free resources available but take meaningful time |
| 2 | Paid training or certification required for effective use |
| 1 | Requires a data scientist, developer, or specialist to operate |

客户团队能否无需专业技能或付费培训即可使用该工具?
| 评分 | 标准 |
|------|------|
| 5 | 无代码;1小时内即可完成自主入门;提供免费教程 |
| 4 | 入门难度低;免费文档或YouTube培训足以满足需求 |
| 3 | 需要一些培训;有免费资源但需花费一定时间学习 |
| 2 | 需要付费培训或认证才能有效使用 |
| 1 | 需要数据科学家、开发人员或专业人员操作 |

Factor 6 — Output Quality

维度6——输出质量

Based on trial, demo, or available samples: is the AI output usable for the client's stated marketing task?
| Score | Criterion |
|-------|-----------|
| 5 | Output usable with minor edits; passes the ai-content-humaniser human voice checklist |
| 4 | Output usable after moderate editing; tone and accuracy are sound |
| 3 | Output requires significant editing but the structure and substance are correct |
| 2 | Output is often inaccurate, generic, or off-brand; editing burden is high |
| 1 | Raw output is unusable; would mislead clients or embarrass the business |
Apply the ai-content-humaniser standard when evaluating any tool that generates written content. If a trial is not possible before scoring, note this as a limitation and flag that output quality must be verified before the Step 2 experiment launches.

基于试用、演示或可用样本:AI输出是否适用于客户指定的营销任务?
| 评分 | 标准 |
|------|------|
| 5 | 输出只需少量编辑即可使用;符合 ai-content-humaniser 的人类语气检查清单 |
| 4 | 输出经过适度编辑后可使用;语气和准确性良好 |
| 3 | 输出需要大量编辑但结构和内容准确 |
| 2 | 输出经常不准确、通用或不符合品牌调性;编辑工作量大 |
| 1 | 原始输出无法使用;会误导客户或损害企业形象 |
在评估任何生成书面内容的工具时,请应用 ai-content-humaniser 标准。若无法在评分前进行试用,请注明此限制并预警:在第2阶段实验启动前必须验证输出质量。

Factor 7 — Vendor Stability

维度7——供应商稳定性

Is this a vendor the client can rely on for at least 12 months?
| Score | Criterion |
|-------|-----------|
| 5 | Established company; 2+ years of public operation; active product updates in last 6 months; public roadmap |
| 4 | 1–2 years old; funded; active updates; no public roadmap but product is clearly maintained |
| 3 | Well-known product but recent changes (acquisition, pivot, rebranding) introduce some uncertainty |
| 2 | Early-stage startup; product may change significantly; limited track record |
| 1 | No verifiable track record; tool may not exist in 12 months |
Assess honestly. Do not recommend a tool with a score of 1 or 2 on this factor unless the client has technical capacity to migrate quickly and the tool cost is zero.

该供应商能否让客户依赖至少12个月?
| 评分 | 标准 |
|------|------|
| 5 | 成熟企业;公开运营2年以上;过去6个月内有活跃的产品更新;有公开路线图 |
| 4 | 成立1-2年;有融资;产品更新活跃;无公开路线图但明显在维护 |
| 3 | 知名产品但近期有变动(被收购、战略转型、品牌重塑),存在一定不确定性 |
| 2 | 早期初创企业;产品可能发生重大变化;业绩记录有限 |
| 1 | 无可验证的业绩记录;12个月内工具可能不复存在 |
请如实评估。除非客户具备快速迁移的技术能力且工具成本为零,否则请勿推荐此维度评分为1或2的工具。

Factor 8 — Total Cost of Ownership

维度8——总拥有成本

What is the real monthly cost once the trial period ends?
| Score | Criterion |
|-------|-----------|
| 5 | Transparent pricing under UGX 500,000/month; no per-seat or overage surprises |
| 4 | UGX 500,000–1,000,000/month; pricing is clear; no hidden fees |
| 3 | UGX 1,000,000–2,500,000/month; pricing is clear but stretches most EA budgets |
| 2 | Expensive or opaque; per-seat, per-usage, or overage fees likely to exceed stated price |
| 1 | Very expensive (above UGX 2,500,000/month) or pricing is deliberately obscured |
Always itemise: base plan cost, per-seat fees if any, usage limits and overage rates, annual vs monthly billing difference, and the total estimated monthly cost in UGX. Use the client's stated budget as the benchmark.

试用结束后的实际月度成本是多少?
| 评分 | 标准 |
|------|------|
| 5 | 定价透明,每月低于500,000 UGX;无按席位或超额使用的隐性费用 |
| 4 | 每月500,000–1,000,000 UGX;定价清晰;无隐藏费用 |
| 3 | 每月1,000,000–2,500,000 UGX;定价清晰但超出大多数东非(EA)客户预算 |
| 2 | 价格昂贵或定价不透明;按席位、按使用量或超额使用费用可能超过标价 |
| 1 | 价格极高(每月超过2,500,000 UGX)或定价故意模糊 |
请逐项列明:基础套餐费用、按席位收费(如有)、使用限制和超额费率、年度与月度计费差异,以及以UGX计算的预估月度总成本。以客户声明的预算为基准。
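The itemisation above can be sketched as a small cost model. The helper function and the exchange rate below are illustrative assumptions, not fixed values; always use and state the rate current at evaluation time.

```python
def monthly_cost_ugx(base_usd: float, seats: int = 1, per_seat_usd: float = 0.0,
                     expected_overage_usd: float = 0.0,
                     usd_to_ugx: float = 3700.0) -> int:
    """Estimate total monthly cost in UGX from itemised USD components.

    usd_to_ugx is an assumed approximate rate for illustration; the real
    evaluation must state the rate actually used.
    """
    total_usd = base_usd + seats * per_seat_usd + expected_overage_usd
    return round(total_usd * usd_to_ugx)

# Example: a $49 base plan plus 3 seats at $10/seat, no expected overage.
estimate = monthly_cost_ugx(49, seats=3, per_seat_usd=10)
```

At the assumed rate this example lands under UGX 500,000/month, i.e. within the score-5 band of the table above.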

Scoring and Decision Rules

评分与决策规则

Sum the 8 factor scores for a total out of 40. Apply the following decision thresholds:
| Total Score | Decision |
|-------------|----------|
| 32–40 | Recommended — proceed to Step 2 experiment |
| 24–31 | Conditional — address named gaps before committing; note which factors to re-assess |
| Below 24 | Deferred — find a better-fit tool; name the reason and the alternative |
Every deferred tool must include: (a) the primary reason for deferral stated in one sentence, and (b) a named alternative tool to evaluate in its place.

将8个维度的得分相加,总分满分为40分。应用以下决策阈值:
| 总分 | 决策 |
|------|------|
| 32–40 | 推荐 —— 进入第2阶段实验 |
| 24–31 | 有条件推荐 —— 在投入前解决指定差距;注明需重新评估的维度 |
| 低于24 | 暂缓 —— 寻找更适配的工具;说明原因并提供替代工具 |
每个暂缓的工具必须包含:(a) 一句话说明暂缓的主要原因;(b) 一个指定的替代评估工具。
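The thresholds above translate directly into a small decision helper. This is a sketch of the scoring rule only, not part of the canonical framework text.

```python
def decide(scores: list) -> tuple:
    """Sum the eight factor scores (each 1-5) and apply the decision thresholds."""
    if len(scores) != 8 or not all(1 <= s <= 5 for s in scores):
        raise ValueError("expected exactly eight scores between 1 and 5")
    total = sum(scores)
    if total >= 32:
        decision = "Recommended"
    elif total >= 24:
        decision = "Conditional"
    else:
        decision = "Deferred"
    return total, decision
```

Note that a uniform score of 4 across all factors lands exactly on the Recommended boundary (32), and a uniform 3 lands exactly on the Conditional boundary (24).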

Output Structure

输出结构

Produce all five sections below. Do not omit any section.

生成以下全部5个章节,请勿遗漏任何章节。

Section 1 — Tool Evaluation Scorecards

章节1——工具评估评分卡

One scorecard per tool. Use this format for each:
每个工具对应一张评分卡。每个评分卡使用以下格式:

[Tool Name]

[工具名称]

Use case being evaluated: [restate the client's named marketing problem]
| Factor | Score (/5) | Notes |
|--------|------------|-------|
| 1. Use Case Fit | | |
| 2. Data Requirements | | |
| 3. Integration Compatibility | | |
| 4. EA Market Accessibility | | |
| 5. Team Capability Match | | |
| 6. Output Quality | | |
| 7. Vendor Stability | | |
| 8. Total Cost of Ownership | | |
| Total | /40 | |
Decision: Recommended / Conditional / Deferred
Key strengths:
  • [Point 1]
  • [Point 2]
Key concerns:
  • [Point 1]
  • [Point 2]
PDPA 2019 flag: [Yes — describe the specific data concern / No]

---
评估用例: [重申客户指定的营销问题]
| 维度 | 评分(/5) | 备注 |
|------|-----------|------|
| 1. 用例适配度 | | |
| 2. 数据要求 | | |
| 3. 集成兼容性 | | |
| 4. 东非(EA)市场可及性 | | |
| 5. 团队能力匹配度 | | |
| 6. 输出质量 | | |
| 7. 供应商稳定性 | | |
| 8. 总拥有成本 | | |
| 总分 | /40 | |
决策: 推荐 / 有条件推荐 / 暂缓
核心优势:
  • [要点1]
  • [要点2]
核心顾虑:
  • [要点1]
  • [要点2]
PDPA 2019预警: [是——描述具体数据顾虑 / 否]

---

Section 2 — Recommended Shortlist

章节2——推荐候选清单

List all tools scoring 24 or above. For conditional tools, state explicitly which factor(s) must be addressed and how before the experiment launches.

列出所有评分24分及以上的工具。对于有条件推荐的工具,请明确说明在实验启动前必须解决的维度及解决方式。

Section 3 — Deferred Tools

章节3——暂缓工具清单

List all tools scoring below 24. For each: one-sentence reason for deferral and one named alternative tool.

列出所有评分低于24分的工具。每个工具需包含:一句话暂缓原因和一个指定的替代工具。

Section 4 — 30-Day Experiment Briefs

章节4——30天实验简报

Produce one experiment brief for every recommended tool (score 24+). Use this format:
为每个推荐工具(评分24+)生成一份实验简报。使用以下格式:

30-Day Experiment Brief — [Tool Name]

30天实验简报——[工具名称]

Hypothesis: If we use [tool name] for [specific task], we expect [measurable result] within 30 days.
Baseline metric: [What is the current state before the tool? Give a number or describe how to establish one in Week 1.]
Success metric: [The specific, measurable target at Day 30. Apply SMART criteria: Specific, Measurable, Achievable, Relevant, Time-bound.]
Week 1 — Setup and baseline
  • Actions: [what to do]
  • Review: [what to check]
Week 2 — First outputs
  • Actions: [what to do]
  • Review: [what to check]
Week 3 — Iteration
  • Actions: [what to do]
  • Review: [what to check]
Week 4 — Evaluate
  • Actions: [what to do]
  • Review: [what to check]
Day 30 Go/No-Go decision criteria:
  • Go: [specific condition that justifies continuing and paying for the tool]
  • No-Go: [specific condition that means the experiment failed; state what happens next]

---
假设: 如果我们使用[工具名称]完成[具体任务],预计30天内可实现[可衡量结果]。
基线指标: [使用工具前的现状?给出数值或说明第1周如何建立基线。]
成功指标: [第30天的具体可衡量目标。遵循SMART标准:具体、可衡量、可实现、相关性、时限性。]
第1周——设置与基线建立
  • 行动:[需执行的操作]
  • 审核:[需检查的内容]
第2周——首次输出
  • 行动:[需执行的操作]
  • 审核:[需检查的内容]
第3周——迭代优化
  • 行动:[需执行的操作]
  • 审核:[需检查的内容]
第4周——效果评估
  • 行动:[需执行的操作]
  • 审核:[需检查的内容]
第30天是否继续决策标准:
  • 继续:[证明值得继续使用并付费的具体条件]
  • 终止:[表明实验失败的具体条件;说明后续行动]

---

Section 5 — Budget Summary

章节5——预算汇总

Produce a single budget table covering all evaluated tools:
| Tool | Free Tier Adequate? | Monthly Cost (USD) | Monthly Cost (UGX) | Decision |
|------|--------------------|--------------------|---------------------|----------|
| | | | | |
State the USD/UGX conversion rate used. State the client's declared budget and whether the recommended shortlist is within budget. If the shortlist exceeds budget, recommend which single tool to start with and why.

生成涵盖所有评估工具的预算表格:
| 工具 | 免费版是否够用? | 月度成本(USD) | 月度成本(UGX) | 决策 |
|------|--------------------|--------------------|---------------------|----------|
| | | | | |
说明所使用的USD/UGX汇率。 说明客户声明的预算,以及推荐候选清单是否在预算范围内。 若候选清单超出预算,请推荐优先使用的单个工具并说明原因。
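One way to assemble the table rows and the budget check programmatically is sketched below. It is a hypothetical helper: the tool data, the rate, and the budget in the example are placeholders, not recommendations.

```python
def budget_summary(tools, usd_to_ugx, budget_ugx):
    """Build markdown budget rows and check the shortlist against the budget.

    tools: list of (name, free_tier_adequate, monthly_usd, decision) tuples.
    Returns (rows, shortlist_total_ugx, within_budget); deferred tools are
    listed but excluded from the shortlist total.
    """
    rows, shortlist_total = [], 0
    for name, free_ok, usd, decision in tools:
        ugx = round(usd * usd_to_ugx)
        if decision != "Deferred":
            shortlist_total += ugx
        rows.append(f"| {name} | {'Yes' if free_ok else 'No'} | "
                    f"${usd:.2f} | UGX {ugx:,} | {decision} |")
    return rows, shortlist_total, shortlist_total <= budget_ugx

# Placeholder data: a free tool plus one $49/month conditional tool,
# against a client budget of UGX 500,000 at an assumed rate of 3,700.
rows, total, within = budget_summary(
    [("Tool A", True, 0.0, "Recommended"), ("Tool B", False, 49.0, "Conditional")],
    usd_to_ugx=3700, budget_ugx=500_000)
```

Whatever the mechanics, the delivered table must still state the rate used, the declared budget, and the single-tool fallback if the shortlist exceeds budget.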

EA-Specific Evaluation Notes

东非(EA)专属评估说明

Apply these rules throughout the evaluation:
  • Prioritise tools with free tiers. EA clients at Step 2 should not pay for a tool until a 30-day experiment has demonstrated measurable value.
  • For any WhatsApp or SMS automation use case, include Africa's Talking in the evaluation automatically, even if the client did not name it. Note it as a recommended addition.
  • Flag any tool that requires a developer, API key setup, or command-line access unless the client confirmed they have technical support.
  • Flag any tool that requires high or consistent bandwidth for field team usage. Prefer offline-capable or low-bandwidth modes where field staff are expected to use the tool.
  • Convert all USD pricing to UGX. Use the approximate rate at time of evaluation and state it explicitly.
  • Do not recommend any tool that fails the Factor 4 (EA Market Accessibility) assessment unless the client has explicitly confirmed they can access it and pay for it.

在整个评估过程中应用以下规则:
  • 优先选择提供免费版的工具。处于第2阶段的东非(EA)客户在30天实验证明可衡量价值前,不应为工具付费。
  • 对于任何WhatsApp或SMS自动化用例,请自动将Africa's Talking纳入评估,即使客户未提及该工具。注明其为推荐补充工具。
  • 若工具需要开发人员、API密钥设置或命令行访问权限,请标记该情况,除非客户确认拥有技术支持。
  • 若工具需要高带宽或稳定带宽供外勤团队使用,请标记该情况。若外勤人员将使用工具,优先选择支持离线或低带宽模式的工具。
  • 将所有USD定价换算为UGX。使用评估时的近似汇率并明确说明。
  • 除非客户明确确认可获取并支付该工具,否则请勿推荐在维度4(东非(EA)市场可及性)评估中不合格的工具。

Cross-References

交叉引用

| Skill | When to use it |
|-------|----------------|
| ai-readiness-diagnostic | Run before this skill; confirms the client is at Canvas Step 2 |
| meta-ai-tools-audit | Reference catalogue; use when client does not have a shortlist yet |
| ai-content-humaniser | Apply to evaluate output quality for any content-generation tool |
| playbook-ai-automation-workflow | Use after a tool is selected to build the automation workflow |

| 技能 | 使用时机 |
|------|----------|
| ai-readiness-diagnostic | 在本技能之前运行;确认客户处于画布第2阶段 |
| meta-ai-tools-audit | 参考目录;当客户暂无候选清单时使用 |
| ai-content-humaniser | 用于评估任何内容生成工具的输出质量 |
| playbook-ai-automation-workflow | 工具选定后用于构建自动化工作流程 |

Tool Categories Reference

工具类别参考

Use this section when the client has no shortlist and needs guidance on which category of AI tool to evaluate. Cross-reference with meta-ai-tools-audit for the full catalogue.

当客户暂无候选清单且需要指导评估哪类AI工具时,使用本节内容。与 meta-ai-tools-audit 交叉引用以获取完整目录。

Category 1: RAG (Retrieval-Augmented Generation) Tools

类别1:RAG(检索增强生成)工具

Tools that connect LLMs to client-specific knowledge bases for accurate, on-brand output:
| Tool | Description | EA accessibility | Approx. cost |
|------|-------------|------------------|--------------|
| Claude Projects | Upload documents; persistent context per project | Yes — browser-based | Included in Claude Pro (~$20/month USD) |
| ChatGPT Projects | Upload documents; persistent context per project | Yes — browser-based | Included in ChatGPT Plus (~$20/month USD) |
| CustomGPT.ai | Build custom knowledge bases with API access | Yes — cloud-based | From $49/month USD |
| Notion AI | RAG within Notion workspace | Yes — cloud-based | From $10/month USD |
| Mem.ai | AI-powered knowledge management | Yes — cloud-based | Free tier available |
Evaluation criteria: How easily can client documents be uploaded? Does the tool maintain source attribution? Can multiple team members access the same knowledge base?

将大语言模型(LLM)与客户专属知识库对接,以生成准确、符合品牌调性的输出的工具:
| 工具 | 描述 | 东非(EA)可及性 | 预估成本 |
|------|------|------------------|----------|
| Claude Projects | 上传文档;每个项目拥有持久上下文 | 是——基于浏览器 | 包含在Claude Pro中(约20美元/月) |
| ChatGPT Projects | 上传文档;每个项目拥有持久上下文 | 是——基于浏览器 | 包含在ChatGPT Plus中(约20美元/月) |
| CustomGPT.ai | 构建带API访问权限的自定义知识库 | 是——基于云 | 起价49美元/月 |
| Notion AI | 在Notion工作区内实现RAG | 是——基于云 | 起价10美元/月 |
| Mem.ai | 基于AI的知识管理工具 | 是——基于云 | 提供免费版 |
评估标准: 客户文档上传是否便捷?工具是否保留来源归因?多个团队成员能否访问同一知识库?

Category 2: Synthetic Research and Persona Tools

类别2:合成研究与用户画像工具

Tools for generating AI-simulated audience research when primary fieldwork is unavailable or too costly:
| Tool | Description | EA accessibility | Approx. cost |
|------|-------------|------------------|--------------|
| Supernatural AI | Synthetic user personas for brand research | Limited — US-focused | Enterprise pricing |
| Glimpse | AI consumer research and audience analysis | Yes — cloud-based | From $99/month USD |
| Synthetic Users | Simulated user testing and focus groups | Yes — cloud-based | From $29/month USD |
| Claude/ChatGPT (prompted) | Structured persona generation via prompts | Yes | Included in existing subscription |
EA note: For most Ugandan SME clients, prompted persona generation via Claude or ChatGPT is the most accessible option. Supernatural AI and Glimpse are better suited to multinational clients with larger research budgets.

当无法开展实地调研或成本过高时,用于生成AI模拟受众研究的工具:
| 工具 | 描述 | 东非(EA)可及性 | 预估成本 |
|------|------|------------------|----------|
| Supernatural AI | 用于品牌研究的合成用户画像 | 有限——聚焦美国市场 | 企业级定价 |
| Glimpse | AI消费者研究与受众分析 | 是——基于云 | 起价99美元/月 |
| Synthetic Users | 模拟用户测试与焦点小组 | 是——基于云 | 起价29美元/月 |
| Claude/ChatGPT(通过提示词) | 通过提示词生成结构化用户画像 | 是 | 包含在现有订阅中 |
东非(EA)说明: 对于大多数乌干达中小企业客户,通过Claude或ChatGPT的提示词生成用户画像是最易获取的选项。Supernatural AI和Glimpse更适合拥有较大研究预算的跨国客户。

Category 3: Agentic AI and Automation Tools

类别3:智能体AI与自动化工具

Tools for building autonomous or semi-autonomous marketing agents:
| Tool | Description | EA accessibility | Approx. cost |
|------|-------------|------------------|--------------|
| Persado | AI language optimisation for copy and ads | Limited — enterprise | Enterprise pricing |
| OfferFit | AI experimentation platform for retention campaigns | Limited — enterprise | Enterprise pricing |
| Braze | AI-powered customer engagement platform | Yes — cloud-based | Enterprise pricing |
| n8n | Open-source workflow automation (self-hostable) | Yes — can self-host | Free (self-hosted) |
| Zapier AI | AI-enhanced workflow automation | Yes — cloud-based | Free tier; from $19.99/month USD |
| Make.com | Visual workflow builder with AI steps | Yes — cloud-based | Free tier; from $9/month USD |
| Claude API | Build custom agents and automations | Yes — API access | Pay-per-token |
EA recommendation: n8n (self-hosted on a local server) combined with the Claude API is the most cost-effective agentic stack for EA-based consultancies. Zapier is the most accessible for clients with no technical resources.

用于构建自主或半自主营销智能体的工具:
| 工具 | 描述 | 东非(EA)可及性 | 预估成本 |
|------|------|------------------|----------|
| Persado | 用于文案和广告的AI语言优化工具 | 有限——企业级 | 企业级定价 |
| OfferFit | 用于留存活动的AI实验平台 | 有限——企业级 | 企业级定价 |
| Braze | 基于AI的客户互动平台 | 是——基于云 | 企业级定价 |
| n8n | 开源工作流自动化工具(可自行部署) | 是——可自行部署 | 免费(自行部署) |
| Zapier AI | 增强AI功能的工作流自动化工具 | 是——基于云 | 提供免费版;起价19.99美元/月 |
| Make.com | 带AI步骤的可视化工作流构建器 | 是——基于云 | 提供免费版;起价9美元/月 |
| Claude API | 构建自定义智能体与自动化流程 | 是——API访问 | 按令牌付费 |
东非(EA)推荐: n8n(在本地服务器自行部署)结合Claude API是东非(EA)咨询公司最具成本效益的智能体技术栈。Zapier是无技术资源客户的最易获取选项。

Quality Criteria

质量标准

  • All 8 factors scored for every tool with written notes — no blank cells, no partial scorecards.
  • EA accessibility assessed explicitly: name the payment method, name the pricing tier, state the UGX equivalent.
  • Every recommended tool (score 24+) is paired with a complete 30-day experiment brief including a measurable hypothesis and Day 30 Go/No-Go criteria.
  • Every deferred tool includes a one-sentence reason and a named alternative.
  • The Uganda Data Protection and Privacy Act 2019 is flagged wherever the tool collects, processes, or transfers personal data — even when the overall score is high.
  • The budget summary is produced in UGX, references the client's declared budget, and resolves conflicts where the shortlist exceeds budget.
  • Output is a decision document a non-technical business owner can act on without further interpretation.
  • Vendor stability is assessed honestly: do not recommend a tool scoring 1 or 2 on Factor 7 without explicitly noting the risk and the mitigation.

  • 为每个工具的所有8个维度评分并撰写备注——无空白单元格,无部分评分卡。
  • 明确评估东非(EA)可及性:注明支付方式、定价层级、UGX等价金额。
  • 每个推荐工具(评分24+)都配有完整的30天实验简报,包含可衡量假设和第30天是否继续的决策标准。
  • 每个暂缓工具都包含一句话原因和一个指定替代工具。
  • 凡工具收集、处理或传输个人数据的情况,均标记《2019年乌干达数据保护与隐私法案》相关预警——即使总分较高。
  • 预算汇总以UGX呈现,参考客户声明的预算,并解决候选清单超出预算的冲突。
  • 输出为非技术企业主无需进一步解读即可采取行动的决策文档。
  • 如实评估供应商稳定性:若推荐维度7评分为1或2的工具,需明确注明风险及缓解措施。

References

参考资料

  • Venkatesan, R. and Lecinski, J. (2026) The AI Marketing Canvas, 2nd ed. Stanford Business Books.
  • Sweenor, D. and Mulkers, T. (2024) AI-Powered Business Intelligence. O'Reilly Media.
  • Nayebi, H. (2025) Generative AI for Product and Marketing Teams. Packt Publishing.
  • Bodnar, K. and Cohen, J. (2012) The B2B Social Media Book. Wiley.
  • Chaffey, D. (2024) Digital Marketing: Strategy, Implementation and Practice. Pearson.
  • Venkatesan, R. 和 Lecinski, J.(2026)《AI营销画布》,第二版,斯坦福商业图书。
  • Sweenor, D. 和 Mulkers, T.(2024)《AI驱动的商业智能》,奥莱利媒体。
  • Nayebi, H.(2025)《面向产品与营销团队的生成式AI》,Packt出版社。
  • Bodnar, K. 和 Cohen, J.(2012)《B2B社交媒体指南》,威利出版社。
  • Chaffey, D.(2024)《数字营销:战略、实施与实践》,培生出版社。