eventhouse-authoring-cli

Update Check — ONCE PER SESSION (mandatory): The first time this skill is used in a session, run the check-updates skill before proceeding.

- GitHub Copilot CLI / VS Code: invoke the check-updates skill.
- Claude Code / Cowork / Cursor / Windsurf / Codex: compare the local vs remote package.json version.
- Skip if the check was already performed earlier in this session.

CRITICAL NOTES

1. To find workspace details (including its ID) from a workspace name: list all workspaces, then filter with JMESPath.
2. To find item details (including its ID) from a workspace ID, item type, and item name: list all items of that type in that workspace, then filter with JMESPath.
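The two lookups above can be sketched offline. The workspace names and IDs below are invented placeholders, and the filter is shown with `jq` (used throughout this skill for output processing); on the live API response, `az rest --query` with an equivalent JMESPath expression works just as well:

```bash
# Hypothetical List Workspaces response, saved locally for illustration
cat > /tmp/workspaces.json << 'EOF'
{"value":[{"id":"aaa-111","displayName":"Sales"},{"id":"bbb-222","displayName":"Ops"}]}
EOF

# Resolve a workspace ID from its display name
WS_ID=$(jq -r '.value[] | select(.displayName == "Ops") | .id' /tmp/workspaces.json)
echo "$WS_ID"
```

The same select-by-name pattern resolves item IDs from a list-items response.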

eventhouse-authoring-cli — Eventhouse Authoring and Management via CLI


Table of Contents


| Task | Reference | Notes |
| --- | --- | --- |
| Finding Workspaces and Items in Fabric | COMMON-CLI.md § Finding Workspaces and Items in Fabric | Mandatory: READ link first [needed for workspace/item ID resolution] |
| Fabric Topology & Key Concepts | COMMON-CORE.md § Fabric Topology & Key Concepts | Hierarchy, Finding Things in Fabric |
| Environment URLs | COMMON-CORE.md § Environment URLs | KQL Cluster URI, KQL Ingestion URI |
| Authentication & Token Acquisition | COMMON-CORE.md § Authentication & Token Acquisition | Wrong audience = 401; KQL audience: `kusto.kusto.windows.net` |
| Core Control-Plane REST APIs | COMMON-CORE.md § Core Control-Plane REST APIs | List Workspaces, List Items, Item Creation |
| Pagination | COMMON-CORE.md § Pagination | |
| Long-Running Operations (LRO) | COMMON-CORE.md § Long-Running Operations (LRO) | |
| Rate Limiting & Throttling | COMMON-CORE.md § Rate Limiting & Throttling | |
| OneLake Data Access | COMMON-CORE.md § OneLake Data Access | Requires `storage.azure.com` token, not Fabric token |
| Job Execution | COMMON-CORE.md § Job Execution | |
| Capacity Management | COMMON-CORE.md § Capacity Management | |
| Gotchas & Troubleshooting | COMMON-CORE.md § Gotchas & Troubleshooting | |
| Best Practices | COMMON-CORE.md § Best Practices | |
| Tool Selection Rationale | COMMON-CLI.md § Tool Selection Rationale | |
| Authentication Recipes | COMMON-CLI.md § Authentication Recipes | `az login` flows and token acquisition |
| Fabric Control-Plane API via `az rest` | COMMON-CLI.md § Fabric Control-Plane API via az rest | Always pass `--resource https://api.fabric.microsoft.com` or `az rest` fails |
| Pagination Pattern | COMMON-CLI.md § Pagination Pattern | |
| Long-Running Operations (LRO) Pattern | COMMON-CLI.md § Long-Running Operations (LRO) Pattern | |
| OneLake Data Access via `curl` | COMMON-CLI.md § OneLake Data Access via curl | Use `curl` not `az rest` (different token audience) |
| SQL / TDS Data-Plane Access | COMMON-CLI.md § SQL / TDS Data-Plane Access | `sqlcmd` (Go) — not for KQL, but useful for cross-workload |
| Job Execution (CLI) | COMMON-CLI.md § Job Execution | |
| OneLake Shortcuts | COMMON-CLI.md § OneLake Shortcuts | |
| Capacity Management (CLI) | COMMON-CLI.md § Capacity Management | |
| Composite Recipes | COMMON-CLI.md § Composite Recipes | |
| Gotchas & Troubleshooting (CLI-Specific) | COMMON-CLI.md § Gotchas & Troubleshooting (CLI-Specific) | `az rest` audience, shell escaping, token expiry |
| Quick Reference: `az rest` Template | COMMON-CLI.md § Quick Reference: az rest Template | |
| Quick Reference: Token Audience / CLI Tool Matrix | COMMON-CLI.md § Quick Reference: Token Audience ↔ CLI Tool Matrix | Which `--resource` + tool for each service |
| Authoring Capability Matrix | EVENTHOUSE-AUTHORING-CORE.md § Authoring Capability Matrix | Read first — KQL Database vs Shortcut (read-only); connection requires Admin/Ingestor role |
| Table Management and Schema Evolution | EVENTHOUSE-AUTHORING-CORE.md § Table Management and Schema Evolution | Create Table, Create-Merge (idempotent), Alter / Rename / Drop, Schema Evolution (Rename, Swap/Blue-Green) |
| Ingestion and Data Mappings | EVENTHOUSE-AUTHORING-CORE.md § Ingestion and Data Mappings | Inline, Set-or-Append/Replace, From Storage, Streaming, Data Mappings (CSV, JSON) |
| Policies | EVENTHOUSE-AUTHORING-CORE.md § Policies | Retention, Caching, Partitioning, Merge |
| Materialized Views | EVENTHOUSE-AUTHORING-CORE.md § Materialized Views | Create, Alter, Lifecycle, Supported aggregations |
| Stored Functions and Update Policies | EVENTHOUSE-AUTHORING-CORE.md § Stored Functions and Update Policies | Stored Functions, Update Policies (auto-transform on ingestion) |
| External Tables | EVENTHOUSE-AUTHORING-CORE.md § External Tables | OneLake / ADLS External Table, Query External Table |
| Permission Model | EVENTHOUSE-AUTHORING-CORE.md § Permission Model | Database Roles, Grant Permissions |
| Authoring Gotchas and Troubleshooting | EVENTHOUSE-AUTHORING-CORE.md § Authoring Gotchas and Troubleshooting Reference | 10 numbered issues with cause + fix |
| Bash Templates | authoring-script-templates.md § Bash Templates | Create Table + Ingest, Schema Deployment, Export Schema, Set Retention/Caching |
| PowerShell Templates | authoring-script-templates.md § PowerShell Templates | Create Table + Ingest, Schema Deployment |
| Tool Stack | SKILL.md § Tool Stack | |
| Connection | SKILL.md § Connection | |
| Authoring Scope | SKILL.md § Authoring Scope | |
| Execute KQL Command | SKILL.md § Execute KQL Command | `az rest` pattern — write JSON body, then execute |
| Table Management via CLI | SKILL.md § Table Management via CLI | Create Table, Add Column, Drop Table |
| Data Ingestion via CLI | SKILL.md § Data Ingestion via CLI | Inline, From Storage, From OneLake, Set-or-Append |
| Policies via CLI | SKILL.md § Policies via CLI | Retention, Caching, Streaming Ingestion |
| Materialized Views via CLI | SKILL.md § Materialized Views via CLI | |
| Functions and Update Policies via CLI | SKILL.md § Functions and Update Policies via CLI | Create Function, Create Update Policy |
| Schema Evolution via CLI | SKILL.md § Schema Evolution via CLI | Safe Schema Deployment Script, Export Current Schema |
| Monitoring Authoring Operations | SKILL.md § Monitoring Authoring Operations | |
| Must / Prefer / Avoid / Troubleshooting | SKILL.md § Must / Prefer / Avoid / Troubleshooting | MUST DO / AVOID / PREFER checklists |
| Agentic Workflows | SKILL.md § Agentic Workflows | Exploration Before Authoring, Script Generation Workflow |
| Examples | SKILL.md § Examples | |
| Agent Integration Notes | SKILL.md § Agent Integration Notes | |


Tool Stack


| Tool | Purpose | Install |
| --- | --- | --- |
| az cli | KQL management commands via Kusto REST API; Fabric control-plane discovery | `winget install Microsoft.AzureCLI` |
| jq | JSON processing and output formatting | `winget install jqlang.jq` |


Connection


Same as eventhouse-consumption-cli. Authoring requires elevated roles:

```bash
# Discover KQL Database query URI
WS_ID="<workspace-id>"
az rest --method GET \
  --url "https://api.fabric.microsoft.com/v1/workspaces/${WS_ID}/kqlDatabases" \
  --resource "https://api.fabric.microsoft.com" \
  | jq '.value[] | {name: .displayName, queryUri: .properties.queryServiceUri}'

# Set connection variables
CLUSTER_URI="https://<cluster>.kusto.fabric.microsoft.com"
DB_NAME="MyDatabase"

# Verify admin access
cat > /tmp/kql_body.json << EOF
{"db":"${DB_NAME}","csl":".show database ${DB_NAME} principals | where Role == 'Admin'"}
EOF
az rest --method POST \
  --url "${CLUSTER_URI}/v1/rest/mgmt" \
  --resource "https://kusto.kusto.windows.net" \
  --headers "Content-Type=application/json" \
  --body @/tmp/kql_body.json \
  | jq '.Tables[0].Rows'
```

---

Authoring Scope


| Operation | Command Pattern |
| --- | --- |
| Create table | `.create-merge table T (cols)` |
| Add column | `.alter-merge table T (NewCol: type)` |
| Drop table | `.drop table T ifexists` |
| Ingest data | `.ingest into table T (...)` |
| Set retention | `.alter table T policy retention ...` |
| Set caching | `.alter table T policy caching hot = Nd` |
| Create function | `.create-or-alter function F() { ... }` |
| Create materialized view | `.create materialized-view MV on table T { ... }` |
| Create update policy | `.alter table T policy update ...` |
| Create data mapping | `.create table T ingestion csv mapping ...` |


Execute KQL Command


All KQL management commands in this skill follow the same `az rest` pattern. After setting `CLUSTER_URI` and `DB`, write the JSON body to `/tmp/kql_body.json` and execute:

```bash
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":"<KQL management command>"}
EOF
az rest --method POST \
  --url "${CLUSTER_URI}/v1/rest/mgmt" \
  --resource "https://kusto.kusto.windows.net" \
  --headers "Content-Type=application/json" \
  --body @/tmp/kql_body.json \
  | jq '.Tables[0].Rows'
```

Nested JSON — for commands whose KQL contains embedded JSON (policies, mappings), use `<< 'EOF'` (single-quoted) to prevent shell expansion of the backslash-escaped quotes, and replace `${DB}` with the literal database name.

PowerShell equivalent — `@{db=$Database;csl=$Command} | ConvertTo-Json -Compress | Out-File $env:TEMP\kql_body.json -Encoding utf8NoBOM`, then `--body "@$env:TEMP\kql_body.json"`. See PowerShell Templates.
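A minimal, locally runnable illustration of the nested-JSON rule (the table and policy values are arbitrary). With the quoted heredoc delimiter the shell writes the backslash-escaped quotes through untouched, so the resulting file is valid JSON:

```bash
# << 'EOF' (quoted delimiter): no expansion; \" survives into the file
cat > /tmp/kql_body.json << 'EOF'
{"db":"MyDB","csl":".alter table Events policy retention '{\"SoftDeletePeriod\":\"365.00:00:00\"}'"}
EOF

# jq parses the file, confirming the body is well-formed JSON
jq -r '.csl' /tmp/kql_body.json
```

With an unquoted `EOF`, the heredoc behaves like a double-quoted string: each `\"` loses its backslash and the body no longer parses as JSON.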


Table Management via CLI


Create Table (Idempotent)


```bash
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".create-merge table Events (Timestamp: datetime, EventType: string, UserId: string, Properties: dynamic, Duration: real)"}
EOF
```

> Execute `/tmp/kql_body.json` — see [Execute KQL Command](#execute-kql-command)

Add Column


```bash
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".alter-merge table Events (Region: string)"}
EOF
```

> Execute `/tmp/kql_body.json` — see [Execute KQL Command](#execute-kql-command)

Drop Table


```bash
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".drop table Events ifexists"}
EOF
```

> Execute `/tmp/kql_body.json` — see [Execute KQL Command](#execute-kql-command)


Data Ingestion via CLI


Inline Ingestion (Testing)


```bash
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".ingest inline into table Events <| 2025-01-15T10:00:00Z,Login,user1,{},0.5\n2025-01-15T10:01:00Z,Click,user2,{},0.2"}
EOF
```

> Execute `/tmp/kql_body.json` — see [Execute KQL Command](#execute-kql-command)

Ingest from Storage


```bash
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".ingest into table Events (h'https://mystorage.blob.core.windows.net/data/events.csv.gz;impersonate') with (format='csv', ingestionMappingReference='EventsCsvMapping', ignoreFirstRecord=true)"}
EOF
```

> Execute `/tmp/kql_body.json` — see [Execute KQL Command](#execute-kql-command)

Ingest from OneLake


```bash
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".ingest into table Events (h'abfss://workspace@onelake.dfs.fabric.microsoft.com/lakehouse.Lakehouse/Files/events.parquet;impersonate') with (format='parquet')"}
EOF
```

> Execute `/tmp/kql_body.json` — see [Execute KQL Command](#execute-kql-command)

Set-or-Append from Query


```bash
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".set-or-append CleanEvents <| RawEvents | where IsValid == true | project Timestamp, EventType, UserId"}
EOF
```

> Execute `/tmp/kql_body.json` — see [Execute KQL Command](#execute-kql-command)


Policies via CLI


Retention


```bash
# Set 365-day retention
cat > /tmp/kql_body.json << 'EOF'
{"db":"MyDB","csl":".alter table Events policy retention '{\"SoftDeletePeriod\":\"365.00:00:00\",\"Recoverability\":\"Enabled\"}'"}
EOF
```

> Execute `/tmp/kql_body.json` — see [Execute KQL Command](#execute-kql-command)

Caching (Hot Cache)


```bash
# Keep last 30 days in hot cache
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".alter table Events policy caching hot = 30d"}
EOF
```

> Execute `/tmp/kql_body.json` — see [Execute KQL Command](#execute-kql-command)

Streaming Ingestion


```bash
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".alter table Events policy streamingingestion enable"}
EOF
```

> Execute `/tmp/kql_body.json` — see [Execute KQL Command](#execute-kql-command)


Materialized Views via CLI


```bash
# Create materialized view with backfill
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".create materialized-view with (backfill=true) HourlyEventCounts on table Events { Events | summarize Count = count(), LastSeen = max(Timestamp) by EventType, bin(Timestamp, 1h) }"}
EOF
```

> Execute `/tmp/kql_body.json` — see [Execute KQL Command](#execute-kql-command)

```bash
# Check health
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".show materialized-view HourlyEventCounts statistics"}
EOF
```

> Execute `/tmp/kql_body.json` — see [Execute KQL Command](#execute-kql-command)

---

Functions and Update Policies via CLI


Create Function


```bash
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".create-or-alter function with (docstring='Parse raw events', folder='ETL') ParseRawEvents() { RawEvents | extend Parsed = parse_json(RawData) | project Timestamp = todatetime(Parsed.timestamp), EventType = tostring(Parsed.eventType), UserId = tostring(Parsed.userId) }"}
EOF
```

> Execute `/tmp/kql_body.json` — see [Execute KQL Command](#execute-kql-command)

Create Update Policy


```bash
cat > /tmp/kql_body.json << 'EOF'
{"db":"MyDB","csl":".alter table ParsedEvents policy update @'[{\"IsEnabled\":true,\"Source\":\"RawEvents\",\"Query\":\"ParseRawEvents()\",\"IsTransactional\":true}]'"}
EOF
```

> Execute `/tmp/kql_body.json` — see [Execute KQL Command](#execute-kql-command)


Schema Evolution via CLI


Safe Schema Deployment Script


Save management commands in a `.kql` file (one per line), then execute each command via `az rest`:

```bash
# deploy_schema.kql contains one command per line:
#   .create-merge table Events (Timestamp: datetime, EventType: string, UserId: string, Properties: dynamic)
#   .create-merge table ParsedEvents (Timestamp: datetime, EventType: string, UserId: string, PageName: string)
#   .alter table Events policy retention '{"SoftDeletePeriod":"365.00:00:00","Recoverability":"Enabled"}'
#   .alter table Events policy caching hot = 30d
# NOTE: commands containing double quotes need backslash-escaping (see "Nested JSON" above)

# Execute each command from the file (see "Execute KQL Command" section)
while IFS= read -r cmd; do
  [[ "$cmd" =~ ^// ]] && continue   # skip comment lines
  [[ -z "$cmd" ]] && continue       # skip blank lines
  cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":"${cmd}"}
EOF
  az rest --method POST \
    --url "${CLUSTER_URI}/v1/rest/mgmt" \
    --resource "https://kusto.kusto.windows.net" \
    --headers "Content-Type=application/json" \
    --body @/tmp/kql_body.json \
    | jq '.Tables[0].Rows'
done < deploy_schema.kql
```

Export Current Schema


```bash
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".show database ${DB} schema as csl script"}
EOF
az rest --method POST \
  --url "${CLUSTER_URI}/v1/rest/mgmt" \
  --resource "https://kusto.kusto.windows.net" \
  --headers "Content-Type=application/json" \
  --body @/tmp/kql_body.json \
  | jq -r '.Tables[0].Rows[][0]' > current_schema.kql
```


Monitoring Authoring Operations


```kql
// Recent management commands
.show commands
| where StartedOn > ago(1h)
| project StartedOn, CommandType, Text = substring(Text, 0, 100), State, Duration
| order by StartedOn desc

// Ingestion failures
.show ingestion failures
| where FailedOn > ago(24h)
| summarize FailureCount = count() by ErrorCode, Table
| order by FailureCount desc

// Materialized view health
.show materialized-views
| project Name, IsEnabled, IsHealthy, MaterializedTo
```


Must / Prefer / Avoid / Troubleshooting


Must


- Clarify before acting on ambiguous prompts — if the request does not specify a target table, operation type, or schema (e.g. "set up my Eventhouse", "configure my database"), ask the user what they want to do. Never infer intent and apply management commands autonomously. Irreversible side effects (policy changes, schema mutations, data ingestion) require explicit user intent.
- Use idempotent commands — `.create-merge table`, `.create-or-alter function`, `.create table ifnotexists`.
- Verify permissions before authoring — must have the `Admin` or `Ingestor` role.
- Test update policies by running the function independently before attaching.
- Include `impersonate` in storage URIs when ingesting from OneLake or Blob Storage.
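The "test the function first" rule can be sketched like this. `ParseRawEvents` and `MyDB` are the placeholder names used elsewhere in this skill, and the resulting body would be POSTed to the `/v1/rest/query` endpoint (queries) rather than `/v1/rest/mgmt` (management commands), since invoking a function standalone is a query:

```bash
# Build a query body that runs the transform function standalone
DB="MyDB"
cat > /tmp/kql_query.json << EOF
{"db":"${DB}","csl":"ParseRawEvents() | take 10"}
EOF

# Sanity-check the body before sending it with az rest
jq -e '.db == "MyDB" and (.csl | startswith("ParseRawEvents"))' /tmp/kql_query.json
```

Only after the standalone run returns the expected rows should the update policy be attached.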

Prefer


- `az rest` with loop for deploying multi-command schema files.
- Fabric KQL MCP server for agent-integrated ingestion and management workflows.
- `.create-merge table` over `.create table` for safe schema evolution.
- Materialized views over repeated expensive aggregation queries.
- Script-based CI/CD — export schema with `.show database DB schema as csl script`, store in git.

Avoid


- `.drop table` without `ifexists` — fails on missing tables.
- `.alter table` to add columns — use `.alter-merge table` instead (additive only).
- Ingestion without mappings for CSV/JSON — column order or field names may not match.
- Hardcoded storage URIs — parameterise in scripts.
- Disabling materialized views without understanding the re-backfill cost.

Troubleshooting


| Symptom | Fix |
| --- | --- |
| `.create table` fails "already exists" | Use `.create-merge table` or `.create table ifnotexists` |
| Ingestion succeeds but table empty | Check data mappings: `.show table T ingestion csv mappings` |
| Update policy not firing | Verify function runs standalone; check `.show table T policy update` |
| `Forbidden (403)` on management commands | Request `admin` or `ingestor` database role |
| Materialized view stuck | Check `.show materialized-view MV statistics`; may need `.disable` / `.enable` |
| OneLake ingest auth error | Add `;impersonate` to `abfss://` URI |


Agentic Workflows


Exploration Before Authoring


Always check for explicit intent before doing anything:

```text
Step 0 → Is the request specific? Does it name a table, operation, and/or schema?
         → NO  → Ask: "What would you like to set up? Options: create tables,
                  configure policies, set up ingestion mappings, create materialized views."
                  STOP — do not proceed until user specifies.
         → YES → Continue to Step 1.
Step 1 → .show tables details                        // what exists?
Step 2 → .show table <TABLE> schema as json          // current columns
Step 3 → .show table <TABLE> policy retention        // current policies
Step 4 → Plan changes (create-merge, alter, etc.)
Step 5 → Execute changes
Step 6 → Verify: .show table <TABLE> schema as json  // confirm changes
```
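The read-only steps 1–3 all reduce to the same request-body shape. A small sketch (database and table names are placeholders) that emits the three bodies, each of which is then POSTed via the pattern in Execute KQL Command:

```bash
DB="MyDB"
TABLE="Events"
for csl in ".show tables details" \
           ".show table ${TABLE} schema as json" \
           ".show table ${TABLE} policy retention"; do
  # One JSON body per exploration step
  printf '{"db":"%s","csl":"%s"}\n' "$DB" "$csl"
done > /tmp/explore_bodies.jsonl
cat /tmp/explore_bodies.jsonl
```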

Script Generation Workflow


```text
Step 1 → Understand requirements from user
Step 2 → Generate KQL management commands
Step 3 → Save to .kql file
Step 4 → Deploy via az rest (one command at a time)
Step 5 → Verify deployed state matches intent
```


Examples


Example 1: Create Table with Policies and Mapping


```bash
# Create table
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".create-merge table SensorData (Timestamp: datetime, DeviceId: string, Temperature: real, Humidity: real, Location: dynamic)"}
EOF
```

> Execute `/tmp/kql_body.json` — see [Execute KQL Command](#execute-kql-command)

```bash
# Set retention
cat > /tmp/kql_body.json << 'EOF'
{"db":"MyDB","csl":".alter table SensorData policy retention '{\"SoftDeletePeriod\":\"90.00:00:00\",\"Recoverability\":\"Enabled\"}'"}
EOF
```

> Execute `/tmp/kql_body.json` — see [Execute KQL Command](#execute-kql-command)

```bash
# Set caching
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".alter table SensorData policy caching hot = 7d"}
EOF
```

> Execute `/tmp/kql_body.json` — see [Execute KQL Command](#execute-kql-command)

```bash
# Create JSON mapping
cat > /tmp/kql_body.json << 'EOF'
{"db":"MyDB","csl":".create table SensorData ingestion json mapping 'SensorJsonMapping' '[{\"column\":\"Timestamp\",\"path\":\"$.ts\",\"datatype\":\"datetime\"},{\"column\":\"DeviceId\",\"path\":\"$.deviceId\",\"datatype\":\"string\"},{\"column\":\"Temperature\",\"path\":\"$.temp\",\"datatype\":\"real\"},{\"column\":\"Humidity\",\"path\":\"$.humidity\",\"datatype\":\"real\"},{\"column\":\"Location\",\"path\":\"$.location\",\"datatype\":\"dynamic\"}]'"}
EOF
```

> Execute `/tmp/kql_body.json` — see [Execute KQL Command](#execute-kql-command)

Example 2: ETL with Update Policy


```kql
// 1. Target table
.create-merge table ParsedLogs (Timestamp: datetime, Level: string, Message: string, Source: string)

// 2. Transform function
.create-or-alter function ParseRawLogs() {
    RawLogs
    | extend J = parse_json(RawMessage)
    | project
        Timestamp = todatetime(J.timestamp),
        Level = tostring(J.level),
        Message = tostring(J.message),
        Source = tostring(J.source)
}

// 3. Attach update policy
.alter table ParsedLogs policy update
@'[{"IsEnabled":true,"Source":"RawLogs","Query":"ParseRawLogs()","IsTransactional":true}]'
```


Agent Integration Notes


- This skill covers authoring operations — creating/altering database objects and ingesting data.
- For read-only queries and data exploration, delegate to eventhouse-consumption-cli.
- For cross-workload orchestration, delegate to the FabricDataEngineer agent.
- All management commands require elevated database roles (`Admin` or `Ingestor`).