sales-treasuredata

Treasure Data Platform Help

Step 1 — Gather context

If references/learnings.md exists, read it first for accumulated platform knowledge.
What do you need help with?
  • A. Initial setup — implementation, onboarding, connecting first data sources
  • B. Profile unification — identity resolution, merging customer records, parent/child tables
  • C. Audience segmentation — building segments, no-code Audience Studio, SQL-based segments
  • D. Connectors & integrations — configuring import/export connectors, Integration Hub, 400+ sources
  • E. Journey orchestration — customer journeys, triggered campaigns, activation workflows
  • F. AI Marketing Cloud — AI Suites (Engagement, Personalization, Creative, Paid Media, Service)
  • G. Agent Hub & Treasure Code — AI agents, Marketing Super Agent, agent development
  • H. API & SDK — TD API, Audience API, Postback API, LLM API, client SDKs
  • I. SQL & queries — Presto/Trino queries, job management, query optimization
  • J. Workflow scheduling — Treasure Workflows, DAGs, scheduling jobs, CI/CD
  • K. Pricing & plans — Intelligent CDP vs AI Marketing Cloud, "No Compute" pricing, Trade-Up Program
  • L. Choosing vs competitors — Treasure Data vs Segment, Tealium, Amperity, Hightouch, RudderStack
  • M. Other
Skip-ahead rule: if the user's prompt already contains enough context, skip to Step 2.

Step 2 — Route or answer directly

If the question is about... route to:
  • CRM data deduplication or quality outside TD → /sales-data-hygiene [question]
  • Retargeting/remarketing strategy → /sales-retargeting [question]
  • Connecting TD to other tools (general integration strategy) → /sales-integration [question]
  • Contact/company enrichment → /sales-enrich [question]
  • Lead scoring models → /sales-lead-score [question]
  • Buying intent signals → /sales-intent [question]
  • Email campaign strategy → /sales-email-marketing [question]
When routing to another skill, provide the exact command: "This is a {problem domain} question — run: /sales-{skill} {user's original question}"

Step 3 — Treasure Data platform reference

Read references/platform-guide.md for the full platform reference — modules, pricing, integrations, data model, workflows, regional endpoints.
If the question involves the API, also read references/treasuredata-api-reference.md.
Answer the user's question using only the relevant section. Don't dump the full reference.

Step 4 — Actionable guidance

You no longer need the platform guide — focus on the user's specific situation.
  • Start with the simplest approach — use Audience Studio UI before writing SQL, use pre-built connectors before custom scripts
  • Check regional endpoints — TD has separate API base URLs for US, EU, Japan, and Korea
  • Test in QA sandbox first — production changes to parent tables or identity rules affect all downstream segments
  • Monitor job queue — Presto/Trino jobs share compute; large queries can block others
  • Use Treasure Workflows for orchestration — don't schedule individual jobs when a DAG handles dependencies
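The regional-endpoint point can be made concrete with a lookup table. The US and Japan hostnames below appear in public Treasure Data docs; the EU and Korea hostnames are my best understanding and should be verified against your console's site setting before use.

```python
# Regional API base URLs. Hostnames may change; always confirm your
# site (US/EU/JP/KR) in TD Console settings before hardcoding one.
TD_ENDPOINTS = {
    "us": "https://api.treasuredata.com",
    "eu": "https://api.eu01.treasuredata.com",   # assumed; verify in console
    "jp": "https://api.treasuredata.co.jp",
    "kr": "https://api.ap02.treasuredata.com",   # assumed; verify in console
}

def base_url(region: str) -> str:
    """Look up the API base URL for a TD site, failing loudly on a typo."""
    try:
        return TD_ENDPOINTS[region.lower()]
    except KeyError:
        raise ValueError(f"Unknown TD region {region!r}; check console settings")
```

Failing loudly here matters because, as noted under Gotchas, hitting the wrong region's URL returns an auth error rather than a helpful redirect.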
If you discover a gotcha, workaround, or tip not covered in references/learnings.md, append it there.

Gotchas

Best-effort from research — review these, especially items about plan-gated features and integration gotchas that may be outdated.
  • SQL required for many workflows — Audience Studio handles basic segments, but complex transformations, custom enrichment, and advanced queries need Presto/Trino SQL. Non-technical marketers will need analyst support.
  • Postback API is case-sensitive — column names in payload must match table schema exactly (case-sensitive). Mismatched casing silently drops data.
  • Legacy compute engine contention — Presto/Hive jobs share resources. Large queries can queue behind others. Schedule heavy jobs during off-peak hours.
  • Profile API refresh lag — unified profiles don't update instantly. Allow time for identity resolution jobs to complete before querying the Profiles API.
  • Implementation timeline — typical deployment takes 8-12 weeks with implementation partner. Budget $30K-$100K+ for implementation costs on top of licensing.
  • "No Compute" pricing — charges are based on unified profiles and events, not queries. But profile count can grow unexpectedly if identity resolution rules are too loose.
  • Regional endpoint mismatch — using the wrong region's API URL returns auth errors, not a helpful redirect. Double-check your site (US/EU/JP/KR) in console settings.
  • Add-on costs — AI Marketing Cloud suites (Engagement, Personalization, Creative, Paid Media, Service) are separate fixed-annual + consumption-based licenses on top of the CDP.
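Because the Postback API drops mismatched-case fields silently, a pre-flight check is cheap insurance. This is a hypothetical helper, not part of any TD SDK; it only demonstrates the casing rule.

```python
# The Postback API matches payload keys to table columns exactly,
# including case; mismatches are dropped without an error. This check
# surfaces keys that differ from the schema only by case.
def casing_mismatches(payload: dict, schema_columns: list) -> list:
    """Return (payload_key, schema_column) pairs that differ only by case."""
    by_lower = {col.lower(): col for col in schema_columns}
    return [
        (key, by_lower[key.lower()])
        for key in payload
        if key.lower() in by_lower and by_lower[key.lower()] != key
    ]
```

Run it against the target table's schema before every payload change; an empty list means the casing is safe.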

Related skills

  • /sales-cdp — CDP comparison and selection strategy across Tealium, Segment, BlueConic, mParticle, Treasure Data
  • /sales-tealium — Tealium CDP — Real-Time CDP, identity resolution, 1300+ connectors
  • /sales-blueconic — BlueConic CDP — profile unification, segmentation, audience activation (mid-market alternative)
  • /sales-data-hygiene — CRM data quality, deduplication, normalization
  • /sales-retargeting — Retargeting and remarketing strategy across ad platforms
  • /sales-integration — Connecting sales tools with webhooks, APIs, Zapier, Make
  • /sales-enrich — Contact and company enrichment across providers
  • /sales-intent — Buying intent signals and prioritization
  • /sales-lead-score — Lead scoring models across platforms
  • /sales-do — Not sure which skill to use? The router matches any sales objective to the right skill. Install: npx skills add sales-skills/sales --skill sales-do

Examples

User prompt: "Our customer profiles are fragmented — website visitors tracked separately from email subscribers and in-store purchases. How do I unify them in Treasure Data?"
Response covers: parent table setup, identity resolution rules (deterministic matching on email/phone, probabilistic on device IDs), data source priority configuration, testing unification in QA sandbox before production.
User prompt: "I need to build an audience of high-value customers who haven't purchased in 90 days and push them to Facebook Custom Audiences"
Response covers: SQL segment definition using purchase history and recency, Audience Studio segment creation, Facebook Custom Audiences connector configuration, sync frequency and match rate expectations.
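The lapsed high-value segment in that example might start from a query like the following. Table and column names (purchases, customer_id, amount) and the $1,000 threshold are assumptions to adapt to your schema; TD_TIME_ADD and TD_SCHEDULED_TIME are Treasure Data Presto UDFs operating on the unixtime `time` column.

```python
# Sketch of the lapsed high-value segment: customers whose lifetime spend
# clears a threshold but whose last purchase is more than 90 days old.
SEGMENT_SQL = """
SELECT customer_id
FROM purchases
GROUP BY customer_id
HAVING SUM(amount) >= 1000
   AND MAX(time) < TD_TIME_ADD(TD_SCHEDULED_TIME(), '-90d')
"""

# Hypothetical execution with the pytd client (pip install pytd); supply
# your own API key, region endpoint, and database:
# import pytd
# client = pytd.Client(apikey="...", endpoint="https://api.treasuredata.com",
#                      database="production")
# lapsed = client.query(SEGMENT_SQL)
```

From there the segment can be registered in Audience Studio and synced through the Facebook Custom Audiences connector.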
User prompt: "Our Treasure Data implementation is $200K/year and leadership wants to know if we're getting value. What should I measure?"
Response covers: profile unification rate, audience activation volume, campaign lift from CDP-powered segments vs non-CDP, time-to-insight reduction, connector utilization across the 400+ available, "No Compute" pricing optimization.

Troubleshooting

Profiles not merging correctly
  • Check identity resolution rules in parent table configuration — are you matching on the right identifiers (email, phone, customer ID)?
  • Verify data source priority order — conflicting values resolve based on source ranking
  • Run a test unification on a small dataset in QA sandbox before applying to production
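The source-priority rule in the second bullet can be modeled as a simple merge: when sources disagree on an attribute, the higher-ranked source wins. This is a toy illustration only; the ranking and record shapes are hypothetical, not TD's actual unification config format.

```python
# Toy model of source-priority conflict resolution. Index 0 is the
# highest-priority source; its values win any conflict.
SOURCE_PRIORITY = ["crm", "web", "email"]

def resolve(records: list) -> dict:
    """Merge per-source records into one profile; higher priority wins."""
    profile = {}
    # Apply lowest-priority records first so higher-priority values overwrite.
    for rec in sorted(records,
                      key=lambda r: SOURCE_PRIORITY.index(r["source"]),
                      reverse=True):
        for field, value in rec.items():
            if field != "source" and value is not None:
                profile[field] = value
    return profile
```

Note that a None from a high-priority source does not erase a real value from a lower one, which mirrors why priority order and null handling both need verifying when profiles merge wrongly.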
Connector sync failing or data not appearing
  • Verify connector credentials haven't expired (especially OAuth tokens for Salesforce, Google)
  • Check the job log in TD Console for specific error messages
  • Confirm the regional API endpoint matches your TD site (US vs EU vs JP vs KR)
  • For Postback API: verify column name casing matches the target table schema exactly
Queries running slowly or timing out
  • Check the job queue — other Presto/Trino jobs may be consuming shared compute
  • Optimize SQL: avoid SELECT *, use WHERE clauses to reduce scan scope, partition large tables by time
  • Schedule heavy analytical queries during off-peak hours
  • Consider Treasure Workflows to chain dependent queries instead of running sequentially
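Time-partition pruning is usually the biggest single lever for the slow-query bullets above. TD_TIME_RANGE is a real Treasure Data Presto UDF; the table and column names in this sketch are illustrative.

```python
# Build a time-bounded Presto query instead of a full-table scan.
# TD_TIME_RANGE(time, start, end, timezone) lets TD prune partitions
# so only the requested window is scanned.
def bounded_query(table: str, start: str, end: str) -> str:
    """Return a query restricted to [start, end) in UTC."""
    return (
        f"SELECT td_client_id, COUNT(*) AS events\n"
        f"FROM {table}\n"
        f"WHERE TD_TIME_RANGE(time, '{start}', '{end}', 'UTC')\n"
        f"GROUP BY 1"
    )
```

A query bounded this way also spends less time in the shared Presto/Trino queue, easing the contention described above.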