# azure-ai-contentsafety-ts


## Azure AI Content Safety REST SDK for TypeScript

Analyze text and images for harmful content with customizable blocklists.

## Installation

```bash
npm install @azure-rest/ai-content-safety @azure/identity @azure/core-auth
```

## Environment Variables

```bash
CONTENT_SAFETY_ENDPOINT=https://<resource>.cognitiveservices.azure.com
CONTENT_SAFETY_KEY=<api-key>
```
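Both variables must be set before a client is created. As a minimal sketch, a guard such as the following fails fast with a clear message (the `requireEnv` helper is illustrative, not part of the SDK):

```typescript
// Illustrative helper (not part of the SDK): read a required
// environment variable or throw a descriptive error.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage (assumes the variables above are exported in your shell):
// const endpoint = requireEnv("CONTENT_SAFETY_ENDPOINT");
// const key = requireEnv("CONTENT_SAFETY_KEY");
```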

## Authentication

**Important:** This is a REST client. `ContentSafetyClient` is a function, not a class.

### API Key

```typescript
import ContentSafetyClient from "@azure-rest/ai-content-safety";
import { AzureKeyCredential } from "@azure/core-auth";

const client = ContentSafetyClient(
  process.env.CONTENT_SAFETY_ENDPOINT!,
  new AzureKeyCredential(process.env.CONTENT_SAFETY_KEY!)
);
```

### DefaultAzureCredential

```typescript
import ContentSafetyClient from "@azure-rest/ai-content-safety";
import { DefaultAzureCredential } from "@azure/identity";

const client = ContentSafetyClient(
  process.env.CONTENT_SAFETY_ENDPOINT!,
  new DefaultAzureCredential()
);
```

## Analyze Text

```typescript
import ContentSafetyClient, { isUnexpected } from "@azure-rest/ai-content-safety";

const result = await client.path("/text:analyze").post({
  body: {
    text: "Text content to analyze",
    categories: ["Hate", "Sexual", "Violence", "SelfHarm"],
    outputType: "FourSeverityLevels"  // or "EightSeverityLevels"
  }
});

if (isUnexpected(result)) {
  throw result.body;
}

for (const analysis of result.body.categoriesAnalysis) {
  console.log(`${analysis.category}: severity ${analysis.severity}`);
}
```

## Analyze Image

### Base64 Content

```typescript
import { readFileSync } from "node:fs";

const imageBuffer = readFileSync("./image.png");
const base64Image = imageBuffer.toString("base64");

const result = await client.path("/image:analyze").post({
  body: {
    image: { content: base64Image }
  }
});

if (isUnexpected(result)) {
  throw result.body;
}

for (const analysis of result.body.categoriesAnalysis) {
  console.log(`${analysis.category}: severity ${analysis.severity}`);
}
```

### Blob URL

```typescript
const result = await client.path("/image:analyze").post({
  body: {
    image: { blobUrl: "https://storage.blob.core.windows.net/container/image.png" }
  }
});
```

## Blocklist Management

### Create Blocklist

```typescript
const result = await client
  .path("/text/blocklists/{blocklistName}", "my-blocklist")
  .patch({
    contentType: "application/merge-patch+json",
    body: {
      description: "Custom blocklist for prohibited terms"
    }
  });

if (isUnexpected(result)) {
  throw result.body;
}

console.log(`Created: ${result.body.blocklistName}`);
```

### Add Items to Blocklist

```typescript
const result = await client
  .path("/text/blocklists/{blocklistName}:addOrUpdateBlocklistItems", "my-blocklist")
  .post({
    body: {
      blocklistItems: [
        { text: "prohibited-term-1", description: "First blocked term" },
        { text: "prohibited-term-2", description: "Second blocked term" }
      ]
    }
  });

if (isUnexpected(result)) {
  throw result.body;
}

for (const item of result.body.blocklistItems ?? []) {
  console.log(`Added: ${item.blocklistItemId}`);
}
```

### Analyze with Blocklist

```typescript
const result = await client.path("/text:analyze").post({
  body: {
    text: "Text that might contain blocked terms",
    blocklistNames: ["my-blocklist"],
    haltOnBlocklistHit: false
  }
});

if (isUnexpected(result)) {
  throw result.body;
}

// Check blocklist matches
if (result.body.blocklistsMatch) {
  for (const match of result.body.blocklistsMatch) {
    console.log(`Blocked: "${match.blocklistItemText}" from ${match.blocklistName}`);
  }
}
```

### List Blocklists

```typescript
const result = await client.path("/text/blocklists").get();

if (isUnexpected(result)) {
  throw result.body;
}

for (const blocklist of result.body.value ?? []) {
  console.log(`${blocklist.blocklistName}: ${blocklist.description}`);
}
```

### Delete Blocklist

```typescript
await client.path("/text/blocklists/{blocklistName}", "my-blocklist").delete();
```

## Harm Categories

| Category | API Term | Description |
|---|---|---|
| Hate and Fairness | `Hate` | Discriminatory language targeting identity groups |
| Sexual | `Sexual` | Sexual content, nudity, pornography |
| Violence | `Violence` | Physical harm, weapons, terrorism |
| Self-Harm | `SelfHarm` | Self-injury, suicide, eating disorders |

## Severity Levels

| Level | Risk | Recommended Action |
|---|---|---|
| 0 | Safe | Allow |
| 2 | Low | Review or allow with warning |
| 4 | Medium | Block or require human review |
| 6 | High | Block immediately |

Output Types:

- `FourSeverityLevels` (default): returns 0, 2, 4, 6
- `EightSeverityLevels`: returns 0-7
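The severity table can be encoded as a small policy function. This is a sketch assuming `FourSeverityLevels` output; the cutoffs mirror the table above and should be tuned to your own risk tolerance:

```typescript
type Action = "allow" | "review" | "block";

// Map a FourSeverityLevels severity (0, 2, 4, 6) to the
// recommended action from the table above. Cutoffs are
// illustrative, not an official recommendation.
function recommendedAction(severity: number): Action {
  if (severity <= 0) return "allow";  // safe
  if (severity <= 2) return "review"; // low: review or allow with warning
  return "block";                     // medium/high: block (or route to human review)
}
```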

## Content Moderation Helper

```typescript
import ContentSafetyClient, { isUnexpected } from "@azure-rest/ai-content-safety";

interface ModerationResult {
  isAllowed: boolean;
  flaggedCategories: string[];
  maxSeverity: number;
  blocklistMatches: string[];
}

async function moderateContent(
  client: ReturnType<typeof ContentSafetyClient>,
  text: string,
  maxAllowedSeverity = 2,
  blocklistNames: string[] = []
): Promise<ModerationResult> {
  const result = await client.path("/text:analyze").post({
    body: { text, blocklistNames, haltOnBlocklistHit: false }
  });

  if (isUnexpected(result)) {
    throw result.body;
  }

  const flaggedCategories = result.body.categoriesAnalysis
    .filter(c => (c.severity ?? 0) > maxAllowedSeverity)
    .map(c => c.category!);

  const maxSeverity = Math.max(
    ...result.body.categoriesAnalysis.map(c => c.severity ?? 0)
  );

  const blocklistMatches = (result.body.blocklistsMatch ?? [])
    .map(m => m.blocklistItemText!);

  return {
    isAllowed: flaggedCategories.length === 0 && blocklistMatches.length === 0,
    flaggedCategories,
    maxSeverity,
    blocklistMatches
  };
}
```
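A caller can then turn a `ModerationResult` into a final decision. The `decide` function below is an illustrative policy, not part of the SDK; the interface is repeated so the sketch is self-contained:

```typescript
// Mirrors the ModerationResult shape returned by the helper above.
interface ModerationResult {
  isAllowed: boolean;
  flaggedCategories: string[];
  maxSeverity: number;
  blocklistMatches: string[];
}

// Illustrative policy: blocklist hits always block; otherwise
// decide on the maximum severity across all categories.
function decide(result: ModerationResult): "allow" | "review" | "block" {
  if (result.blocklistMatches.length > 0) return "block";
  if (result.isAllowed) return "allow";
  return result.maxSeverity >= 4 ? "block" : "review";
}
```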

## API Endpoints

| Operation | Method | Path |
|---|---|---|
| Analyze Text | POST | `/text:analyze` |
| Analyze Image | POST | `/image:analyze` |
| Create/Update Blocklist | PATCH | `/text/blocklists/{blocklistName}` |
| List Blocklists | GET | `/text/blocklists` |
| Delete Blocklist | DELETE | `/text/blocklists/{blocklistName}` |
| Add Blocklist Items | POST | `/text/blocklists/{blocklistName}:addOrUpdateBlocklistItems` |
| List Blocklist Items | GET | `/text/blocklists/{blocklistName}/blocklistItems` |
| Remove Blocklist Items | POST | `/text/blocklists/{blocklistName}:removeBlocklistItems` |

## Key Types

```typescript
import ContentSafetyClient, {
  isUnexpected,
  AnalyzeTextParameters,
  AnalyzeImageParameters,
  TextCategoriesAnalysisOutput,
  ImageCategoriesAnalysisOutput,
  TextBlocklist,
  TextBlocklistItem
} from "@azure-rest/ai-content-safety";
```

## Best Practices

1. **Always use `isUnexpected()`** - type guard for error handling
2. **Set appropriate thresholds** - different categories may need different severity thresholds
3. **Use blocklists for domain-specific terms** - supplement AI detection with custom rules
4. **Log moderation decisions** - keep an audit trail for compliance
5. **Handle edge cases** - empty text, very long text, unsupported image formats
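Practice 2 can be sketched as a per-category threshold map. The category names match the API terms above; the threshold values here are examples only, not recommendations:

```typescript
// Example per-category severity thresholds (FourSeverityLevels).
// Values are illustrative; tune them to product requirements.
const thresholds: Record<string, number> = {
  Hate: 2,
  Sexual: 2,
  Violence: 4,
  SelfHarm: 0
};

// Structural subset of the SDK's category analysis output.
interface CategoryAnalysis {
  category?: string;
  severity?: number;
}

// Flag any category whose severity exceeds its threshold;
// unknown categories fall back to the strictest threshold (0).
function flaggedCategories(analysis: CategoryAnalysis[]): string[] {
  return analysis
    .filter(c => (c.severity ?? 0) > (thresholds[c.category ?? ""] ?? 0))
    .map(c => c.category ?? "Unknown");
}
```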

## When to Use

Use this skill to execute the workflow or actions described in the overview.