eventhouse-authoring-cli
Update Check — ONCE PER SESSION (mandatory)

The first time this skill is used in a session, run the `check-updates` skill before proceeding.
- GitHub Copilot CLI / VS Code: invoke the `check-updates` skill.
- Claude Code / Cowork / Cursor / Windsurf / Codex: compare local vs remote package.json version.
- Skip if the check was already performed earlier in this session.
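The package.json comparison can be sketched as follows. This is a minimal sketch: the file paths and version numbers are invented for illustration, and the remote copy is assumed to have been fetched already (e.g. with curl).

```shell
# Sketch: compare local vs remote package.json versions. The sample files and
# version numbers below are made up; in practice the remote file would be
# fetched from the skill's repository.
cat > /tmp/local_package.json << 'EOF'
{"name":"eventhouse-authoring-cli","version":"1.2.0"}
EOF
cat > /tmp/remote_package.json << 'EOF'
{"name":"eventhouse-authoring-cli","version":"1.3.0"}
EOF
local_v=$(jq -r '.version' /tmp/local_package.json)
remote_v=$(jq -r '.version' /tmp/remote_package.json)
if [ "$local_v" != "$remote_v" ]; then
  echo "update available: $local_v -> $remote_v"
fi
```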
CRITICAL NOTES
- To find workspace details (including its ID) from a workspace name: list all workspaces, then filter with JMESPath.
- To find item details (including its ID) from a workspace ID, item type, and item name: list all items of that type in that workspace, then filter with JMESPath.
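As a sketch of that filtering step, the snippet below applies the same filter locally with jq on a saved List Workspaces response (the workspace names and IDs are invented sample data; `az rest --query` accepts the equivalent JMESPath expression server-side, e.g. `value[?displayName=='Telemetry'].id | [0]`).

```shell
# Sketch: filter a List Workspaces payload by display name. The payload below
# is a made-up stand-in for a real "az rest" response.
cat > /tmp/workspaces.json << 'EOF'
{"value":[{"id":"aaa-111","displayName":"Sales"},{"id":"bbb-222","displayName":"Telemetry"}]}
EOF
WS_ID=$(jq -r '.value[] | select(.displayName == "Telemetry") | .id' /tmp/workspaces.json)
echo "$WS_ID"
```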
eventhouse-authoring-cli — Eventhouse Authoring and Management via CLI
Table of Contents
| Task | Reference | Notes |
|---|---|---|
| Finding Workspaces and Items in Fabric | COMMON-CLI.md § Finding Workspaces and Items in Fabric | Mandatory — READ link first [needed for workspace/item ID resolution] |
| Fabric Topology & Key Concepts | COMMON-CORE.md § Fabric Topology & Key Concepts | Hierarchy, Finding Things in Fabric |
| Environment URLs | COMMON-CORE.md § Environment URLs | KQL Cluster URI, KQL Ingestion URI |
| Authentication & Token Acquisition | COMMON-CORE.md § Authentication & Token Acquisition | Wrong audience = 401; KQL audience: `https://kusto.kusto.windows.net` |
| Core Control-Plane REST APIs | COMMON-CORE.md § Core Control-Plane REST APIs | List Workspaces, List Items, Item Creation |
| Pagination | COMMON-CORE.md § Pagination | |
| Long-Running Operations (LRO) | COMMON-CORE.md § Long-Running Operations (LRO) | |
| Rate Limiting & Throttling | COMMON-CORE.md § Rate Limiting & Throttling | |
| OneLake Data Access | COMMON-CORE.md § OneLake Data Access | Requires |
| Job Execution | COMMON-CORE.md § Job Execution | |
| Capacity Management | COMMON-CORE.md § Capacity Management | |
| Gotchas & Troubleshooting | COMMON-CORE.md § Gotchas & Troubleshooting | |
| Best Practices | COMMON-CORE.md § Best Practices | |
| Tool Selection Rationale | COMMON-CLI.md § Tool Selection Rationale | |
| Authentication Recipes | COMMON-CLI.md § Authentication Recipes | |
| Fabric Control-Plane API via `az rest` | COMMON-CLI.md § Fabric Control-Plane API via az rest | Always pass `--resource` |
| Pagination Pattern | COMMON-CLI.md § Pagination Pattern | |
| Long-Running Operations (LRO) Pattern | COMMON-CLI.md § Long-Running Operations (LRO) Pattern | |
| OneLake Data Access via `curl` | COMMON-CLI.md § OneLake Data Access via curl | Use |
| SQL / TDS Data-Plane Access | COMMON-CLI.md § SQL / TDS Data-Plane Access | |
| Job Execution (CLI) | COMMON-CLI.md § Job Execution | |
| OneLake Shortcuts | COMMON-CLI.md § OneLake Shortcuts | |
| Capacity Management (CLI) | COMMON-CLI.md § Capacity Management | |
| Composite Recipes | COMMON-CLI.md § Composite Recipes | |
| Gotchas & Troubleshooting (CLI-Specific) | COMMON-CLI.md § Gotchas & Troubleshooting (CLI-Specific) | |
| Quick Reference: `az rest` Template | COMMON-CLI.md § Quick Reference: az rest Template | |
| Quick Reference: Token Audience / CLI Tool Matrix | COMMON-CLI.md § Quick Reference: Token Audience ↔ CLI Tool Matrix | Which |
| Authoring Capability Matrix | EVENTHOUSE-AUTHORING-CORE.md § Authoring Capability Matrix | Read first — KQL Database vs Shortcut (read-only); connection requires Admin/Ingestor role |
| Table Management and Schema Evolution | EVENTHOUSE-AUTHORING-CORE.md § Table Management and Schema Evolution | Create Table, Create-Merge (idempotent), Alter / Rename / Drop, Schema Evolution (Rename, Swap/Blue-Green) |
| Ingestion and Data Mappings | EVENTHOUSE-AUTHORING-CORE.md § Ingestion and Data Mappings | Inline, Set-or-Append/Replace, From Storage, Streaming, Data Mappings (CSV, JSON) |
| Policies | EVENTHOUSE-AUTHORING-CORE.md § Policies | Retention, Caching, Partitioning, Merge |
| Materialized Views | EVENTHOUSE-AUTHORING-CORE.md § Materialized Views | Create, Alter, Lifecycle, Supported aggregations |
| Stored Functions and Update Policies | EVENTHOUSE-AUTHORING-CORE.md § Stored Functions and Update Policies | Stored Functions, Update Policies (auto-transform on ingestion) |
| External Tables | EVENTHOUSE-AUTHORING-CORE.md § External Tables | OneLake / ADLS External Table, Query External Table |
| Permission Model | EVENTHOUSE-AUTHORING-CORE.md § Permission Model | Database Roles, Grant Permissions |
| Authoring Gotchas and Troubleshooting | EVENTHOUSE-AUTHORING-CORE.md § Authoring Gotchas and Troubleshooting Reference | 10 numbered issues with cause + fix |
| Bash Templates | authoring-script-templates.md § Bash Templates | Create Table + Ingest, Schema Deployment, Export Schema, Set Retention/Caching |
| PowerShell Templates | authoring-script-templates.md § PowerShell Templates | Create Table + Ingest, Schema Deployment |
| Tool Stack | SKILL.md § Tool Stack | |
| Connection | SKILL.md § Connection | |
| Authoring Scope | SKILL.md § Authoring Scope | |
| Execute KQL Command | SKILL.md § Execute KQL Command | |
| Table Management via CLI | SKILL.md § Table Management via CLI | Create Table, Add Column, Drop Table |
| Data Ingestion via CLI | SKILL.md § Data Ingestion via CLI | Inline, From Storage, From OneLake, Set-or-Append |
| Policies via CLI | SKILL.md § Policies via CLI | Retention, Caching, Streaming Ingestion |
| Materialized Views via CLI | SKILL.md § Materialized Views via CLI | |
| Functions and Update Policies via CLI | SKILL.md § Functions and Update Policies via CLI | Create Function, Create Update Policy |
| Schema Evolution via CLI | SKILL.md § Schema Evolution via CLI | Safe Schema Deployment Script, Export Current Schema |
| Monitoring Authoring Operations | SKILL.md § Monitoring Authoring Operations | |
| Must / Prefer / Avoid / Troubleshooting | SKILL.md § Must / Prefer / Avoid / Troubleshooting | MUST DO / AVOID / PREFER checklists |
| Agentic Workflows | SKILL.md § Agentic Workflows | Exploration Before Authoring, Script Generation Workflow |
| Examples | SKILL.md § Examples | |
| Agent Integration Notes | SKILL.md § Agent Integration Notes | |
Tool Stack
| Tool | Purpose | Install |
|---|---|---|
| az cli | KQL management commands via Kusto REST API; Fabric control-plane discovery | |
| jq | JSON processing and output formatting | |
Connection
Same as eventhouse-consumption-cli. Authoring requires elevated roles:

```bash
# Discover KQL Database query URI
WS_ID="<workspace-id>"
az rest --method GET \
  --url "https://api.fabric.microsoft.com/v1/workspaces/${WS_ID}/kqlDatabases" \
  --resource "https://api.fabric.microsoft.com" \
  | jq '.value[] | {name: .displayName, queryUri: .properties.queryServiceUri}'

# Set connection variables
CLUSTER_URI="https://<cluster>.kusto.fabric.microsoft.com"
DB_NAME="MyDatabase"

# Verify admin access
cat > /tmp/kql_body.json << EOF
{"db":"${DB_NAME}","csl":".show database ${DB_NAME} principals | where Role == 'Admin'"}
EOF
az rest --method POST \
  --url "${CLUSTER_URI}/v1/rest/mgmt" \
  --resource "https://kusto.kusto.windows.net" \
  --headers "Content-Type=application/json" \
  --body @/tmp/kql_body.json \
  | jq '.Tables[0].Rows'
```
---

Authoring Scope
| Operation | Command Pattern |
|---|---|
| Create table | `.create-merge table` |
| Add column | `.alter-merge table` |
| Drop table | `.drop table <T> ifexists` |
| Ingest data | `.ingest into table` / `.ingest inline into table` |
| Set retention | `.alter table <T> policy retention` |
| Set caching | `.alter table <T> policy caching hot` |
| Create function | `.create-or-alter function` |
| Create materialized view | `.create materialized-view` |
| Create update policy | `.alter table <T> policy update` |
| Create data mapping | `.create table <T> ingestion json mapping` |
Execute KQL Command
All KQL management commands in this skill follow the same pattern. After setting `CLUSTER_URI` and `DB`, write the JSON body to `/tmp/kql_body.json` and execute with `az rest`:

```bash
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":"<KQL management command>"}
EOF
az rest --method POST \
  --url "${CLUSTER_URI}/v1/rest/mgmt" \
  --resource "https://kusto.kusto.windows.net" \
  --headers "Content-Type=application/json" \
  --body @/tmp/kql_body.json \
  | jq '.Tables[0].Rows'
```

Nested JSON — For commands whose KQL contains embedded JSON (policies, mappings), use `<< 'EOF'` (single-quoted) to prevent shell expansion of backslash-escaped quotes, and replace `${DB}` with the literal database name.

PowerShell equivalent — `@{db=$Database;csl=$Command} | ConvertTo-Json -Compress | Out-File $env:TEMP\kql_body.json -Encoding utf8NoBOM`, then `--body "@$env:TEMP\kql_body.json"`. See PowerShell Templates.
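For nested-JSON commands, an alternative to hand-escaping is to let jq build the body. A minimal sketch (the database name and command are examples):

```shell
# Sketch: build the request body with jq so quotes inside the KQL (e.g. a
# policy JSON literal) are escaped automatically.
DB="MyDatabase"
CSL=".alter table Events policy retention '{\"SoftDeletePeriod\":\"365.00:00:00\",\"Recoverability\":\"Enabled\"}'"
jq -n --arg db "$DB" --arg csl "$CSL" '{db: $db, csl: $csl}' > /tmp/kql_body.json
jq -r '.db' /tmp/kql_body.json
```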
Table Management via CLI
Create Table (Idempotent)
```bash
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".create-merge table Events (Timestamp: datetime, EventType: string, UserId: string, Properties: dynamic, Duration: real)"}
EOF
```
> Execute `/tmp/kql_body.json` — see [Execute KQL Command](#execute-kql-command)
Add Column
```bash
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".alter-merge table Events (Region: string)"}
EOF
```
> Execute `/tmp/kql_body.json` — see [Execute KQL Command](#execute-kql-command)
Drop Table
```bash
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".drop table Events ifexists"}
EOF
```
> Execute `/tmp/kql_body.json` — see [Execute KQL Command](#execute-kql-command)
Data Ingestion via CLI
Inline Ingestion (Testing)
```bash
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".ingest inline into table Events <| 2025-01-15T10:00:00Z,Login,user1,{},0.5\n2025-01-15T10:01:00Z,Click,user2,{},0.2"}
EOF
```
> Execute `/tmp/kql_body.json` — see [Execute KQL Command](#execute-kql-command)
Ingest from Storage
```bash
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".ingest into table Events (h'https://mystorage.blob.core.windows.net/data/events.csv.gz;impersonate') with (format='csv', ingestionMappingReference='EventsCsvMapping', ignoreFirstRecord=true)"}
EOF
```
> Execute `/tmp/kql_body.json` — see [Execute KQL Command](#execute-kql-command)
Ingest from OneLake
```bash
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".ingest into table Events (h'abfss://workspace@onelake.dfs.fabric.microsoft.com/lakehouse.Lakehouse/Files/events.parquet;impersonate') with (format='parquet')"}
EOF
```
> Execute `/tmp/kql_body.json` — see [Execute KQL Command](#execute-kql-command)
Set-or-Append from Query
```bash
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".set-or-append CleanEvents <| RawEvents | where IsValid == true | project Timestamp, EventType, UserId"}
EOF
```
> Execute `/tmp/kql_body.json` — see [Execute KQL Command](#execute-kql-command)
Policies via CLI
Retention
```bash
# Set 365-day retention
cat > /tmp/kql_body.json << 'EOF'
{"db":"MyDB","csl":".alter table Events policy retention '{\"SoftDeletePeriod\":\"365.00:00:00\",\"Recoverability\":\"Enabled\"}'"}
EOF
```
> Execute `/tmp/kql_body.json` — see [Execute KQL Command](#execute-kql-command)

Caching (Hot Cache)
```bash
# Keep last 30 days in hot cache
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".alter table Events policy caching hot = 30d"}
EOF
```
> Execute `/tmp/kql_body.json` — see [Execute KQL Command](#execute-kql-command)

Streaming Ingestion
```bash
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".alter table Events policy streamingingestion enable"}
EOF
```
> Execute `/tmp/kql_body.json` — see [Execute KQL Command](#execute-kql-command)
Materialized Views via CLI
```bash
# Create materialized view with backfill
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".create materialized-view with (backfill=true) HourlyEventCounts on table Events { Events | summarize Count = count(), LastSeen = max(Timestamp) by EventType, bin(Timestamp, 1h) }"}
EOF
```
> Execute `/tmp/kql_body.json` — see [Execute KQL Command](#execute-kql-command)

```bash
# Check health
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".show materialized-view HourlyEventCounts statistics"}
EOF
```
> Execute `/tmp/kql_body.json` — see [Execute KQL Command](#execute-kql-command)

---

Functions and Update Policies via CLI
Create Function
```bash
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".create-or-alter function with (docstring='Parse raw events', folder='ETL') ParseRawEvents() { RawEvents | extend Parsed = parse_json(RawData) | project Timestamp = todatetime(Parsed.timestamp), EventType = tostring(Parsed.eventType), UserId = tostring(Parsed.userId) }"}
EOF
```
> Execute `/tmp/kql_body.json` — see [Execute KQL Command](#execute-kql-command)
Create Update Policy
```bash
cat > /tmp/kql_body.json << 'EOF'
{"db":"MyDB","csl":".alter table ParsedEvents policy update @'[{\"IsEnabled\":true,\"Source\":\"RawEvents\",\"Query\":\"ParseRawEvents()\",\"IsTransactional\":true}]'"}
EOF
```
> Execute `/tmp/kql_body.json` — see [Execute KQL Command](#execute-kql-command)
Schema Evolution via CLI
Safe Schema Deployment Script
Save management commands in a `.kql` file (one per line), then execute each command via `az rest`:

```bash
# deploy_schema.kql contains one command per line:
#   .create-merge table Events (Timestamp: datetime, EventType: string, UserId: string, Properties: dynamic)
#   .create-merge table ParsedEvents (Timestamp: datetime, EventType: string, UserId: string, PageName: string)
#   .alter table Events policy retention '{"SoftDeletePeriod":"365.00:00:00","Recoverability":"Enabled"}'
#   .alter table Events policy caching hot = 30d

# Execute each command from the file (see "Execute KQL Command" section).
# jq -n builds the body so quotes inside a command (e.g. the retention policy
# JSON) are escaped correctly.
while IFS= read -r cmd; do
  [[ "$cmd" =~ ^// ]] && continue   # skip comment lines
  [[ -z "$cmd" ]] && continue       # skip blank lines
  jq -n --arg db "$DB" --arg csl "$cmd" '{db: $db, csl: $csl}' > /tmp/kql_body.json
  az rest --method POST \
    --url "${CLUSTER_URI}/v1/rest/mgmt" \
    --resource "https://kusto.kusto.windows.net" \
    --headers "Content-Type=application/json" \
    --body @/tmp/kql_body.json \
    | jq '.Tables[0].Rows'
done < deploy_schema.kql
```
undefinedExport Current Schema
```bash
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".show database ${DB} schema as csl script"}
EOF
az rest --method POST \
  --url "${CLUSTER_URI}/v1/rest/mgmt" \
  --resource "https://kusto.kusto.windows.net" \
  --headers "Content-Type=application/json" \
  --body @/tmp/kql_body.json \
  | jq -r '.Tables[0].Rows[][0]' > current_schema.kql
```

Monitoring Authoring Operations
```kql
// Recent management commands
.show commands
| where StartedOn > ago(1h)
| project StartedOn, CommandType, Text = substring(Text, 0, 100), State, Duration
| order by StartedOn desc

// Ingestion failures
.show ingestion failures
| where FailedOn > ago(24h)
| summarize FailureCount = count() by ErrorCode, Table
| order by FailureCount desc

// Materialized view health
.show materialized-views
| project Name, IsEnabled, IsHealthy, MaterializedTo
```

Must / Prefer / Avoid / Troubleshooting
Must

- Clarify before acting on ambiguous prompts — if the request does not specify a target table, operation type, or schema (e.g. "set up my Eventhouse", "configure my database"), ask the user what they want to do. Never infer intent and apply management commands autonomously. Irreversible side-effects (policy changes, schema mutations, data ingestion) require explicit user intent.
- Use idempotent commands — `.create-merge table`, `.create-or-alter function`, `.create table ifnotexists`.
- Verify permissions before authoring — must have `Admin` or `Ingestor` role.
- Test update policies by running the function independently before attaching.
- Include `;impersonate` in storage URIs when ingesting from OneLake or Blob Storage.
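The "test update policies" rule above can be sketched as running the transform function standalone as a plain query before attaching it (the function name matches the earlier example; queries go to the `/v1/rest/query` endpoint rather than `/v1/rest/mgmt`):

```shell
# Sketch: dry-run an update-policy function as a query before attaching it.
# DB defaults to an example name; the az rest call is shown but not executed.
DB="${DB:-MyDatabase}"
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":"ParseRawEvents() | take 10"}
EOF
# az rest --method POST --url "${CLUSTER_URI}/v1/rest/query" \
#   --resource "https://kusto.kusto.windows.net" \
#   --headers "Content-Type=application/json" \
#   --body @/tmp/kql_body.json | jq '.Tables[0].Rows'
jq -r '.csl' /tmp/kql_body.json
```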
Prefer

- `az rest` with a loop for deploying multi-command schema files.
- Fabric KQL MCP server for agent-integrated ingestion and management workflows.
- `.create-merge table` over `.create table` for safe schema evolution.
- Materialized views over repeated expensive aggregation queries.
- Script-based CI/CD — export schema with `.show database <DB> schema as csl script`, store in git.
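The script-based CI/CD idea can be sketched as a drift check between the committed schema and a fresh export. The file contents below are tiny illustrative stand-ins for real exported schema scripts:

```shell
# Sketch: detect schema drift in CI by diffing the schema committed in git
# against a freshly exported one; both files here are made-up examples.
printf '%s\n' '.create-merge table Events (Timestamp: datetime)' > /tmp/committed_schema.kql
printf '%s\n' '.create-merge table Events (Timestamp: datetime, Region: string)' > /tmp/current_schema.kql
if ! diff -q /tmp/committed_schema.kql /tmp/current_schema.kql > /dev/null; then
  echo "schema drift detected"
fi
```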
Avoid

- `.drop table` without `ifexists` — fails on missing tables.
- `.alter table` to add columns — use `.alter-merge table` instead (additive only).
- Ingestion without mappings for CSV/JSON — column order or field names may not match.
- Hardcoded storage URIs — parameterise in scripts.
- Disabling materialized views without understanding the re-backfill cost.
Troubleshooting

| Symptom | Fix |
|---|---|
| `.drop table` fails on a missing table | Use `ifexists` |
| Ingestion succeeds but table empty | Check data mappings: `.show table <Table> ingestion mappings` |
| Update policy not firing | Verify function runs standalone; check `.show table <Table> policy update` |
| Management command returns Forbidden (403) | Request `Admin` or `Ingestor` role on the database |
| Materialized view stuck | Check `.show materialized-view <Name> statistics` |
| OneLake ingest auth error | Add `;impersonate` to the storage URI |
Agentic Workflows
Exploration Before Authoring
Always check for explicit intent before doing anything:

```text
Step 0 → Is the request specific? Does it name a table, operation, and/or schema?
         → NO  → Ask: "What would you like to set up? Options: create tables,
                 configure policies, set up ingestion mappings, create materialized views."
                 STOP — do not proceed until user specifies.
         → YES → Continue to Step 1.
Step 1 → .show tables details                  // what exists?
Step 2 → .show table <TABLE> schema as json    // current columns
Step 3 → .show table <TABLE> policy retention  // current policies
Step 4 → Plan changes (create-merge, alter, etc.)
Step 5 → Execute changes
Step 6 → Verify: .show table <TABLE> schema as json  // confirm changes
```

Script Generation Workflow
```text
Step 1 → Understand requirements from user
Step 2 → Generate KQL management commands
Step 3 → Save to .kql file
Step 4 → Deploy via az rest (one command at a time)
Step 5 → Verify deployed state matches intent
```

Examples
Example 1: Create Table with Policies and Mapping
```bash
# Create table
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".create-merge table SensorData (Timestamp: datetime, DeviceId: string, Temperature: real, Humidity: real, Location: dynamic)"}
EOF
```
> Execute `/tmp/kql_body.json` — see [Execute KQL Command](#execute-kql-command)

```bash
# Set retention
cat > /tmp/kql_body.json << 'EOF'
{"db":"MyDB","csl":".alter table SensorData policy retention '{\"SoftDeletePeriod\":\"90.00:00:00\",\"Recoverability\":\"Enabled\"}'"}
EOF
```
> Execute `/tmp/kql_body.json` — see [Execute KQL Command](#execute-kql-command)

```bash
# Set caching
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".alter table SensorData policy caching hot = 7d"}
EOF
```
> Execute `/tmp/kql_body.json` — see [Execute KQL Command](#execute-kql-command)

```bash
# Create JSON mapping
cat > /tmp/kql_body.json << 'EOF'
{"db":"MyDB","csl":".create table SensorData ingestion json mapping 'SensorJsonMapping' '[{\"column\":\"Timestamp\",\"path\":\"$.ts\",\"datatype\":\"datetime\"},{\"column\":\"DeviceId\",\"path\":\"$.deviceId\",\"datatype\":\"string\"},{\"column\":\"Temperature\",\"path\":\"$.temp\",\"datatype\":\"real\"},{\"column\":\"Humidity\",\"path\":\"$.humidity\",\"datatype\":\"real\"},{\"column\":\"Location\",\"path\":\"$.location\",\"datatype\":\"dynamic\"}]'"}
EOF
```
> Execute `/tmp/kql_body.json` — see [Execute KQL Command](#execute-kql-command)

Example 2: ETL with Update Policy
```kql
// 1. Target table
.create-merge table ParsedLogs (Timestamp: datetime, Level: string, Message: string, Source: string)

// 2. Transform function
.create-or-alter function ParseRawLogs() {
    RawLogs
    | extend J = parse_json(RawMessage)
    | project
        Timestamp = todatetime(J.timestamp),
        Level = tostring(J.level),
        Message = tostring(J.message),
        Source = tostring(J.source)
}

// 3. Attach update policy
.alter table ParsedLogs policy update
    @'[{"IsEnabled":true,"Source":"RawLogs","Query":"ParseRawLogs()","IsTransactional":true}]'
```

Agent Integration Notes
- This skill covers authoring operations — creating/altering database objects and ingesting data.
- For read-only queries and data exploration, delegate to eventhouse-consumption-cli.
- For cross-workload orchestration, delegate to the FabricDataEngineer agent.
- All management commands require elevated database roles (`Admin` or `Ingestor`).