# Azure AI Content Safety REST SDK for TypeScript
Analyze text and images for harmful content with customizable blocklists.
## Installation
```bash
npm install @azure-rest/ai-content-safety @azure/identity @azure/core-auth
```

## Environment Variables
```bash
CONTENT_SAFETY_ENDPOINT=https://<resource>.cognitiveservices.azure.com
CONTENT_SAFETY_KEY=<api-key>
```

## Authentication
**Important:** This is a REST client; `ContentSafetyClient` is a function, not a class.

### API Key
```typescript
import ContentSafetyClient from "@azure-rest/ai-content-safety";
import { AzureKeyCredential } from "@azure/core-auth";

const client = ContentSafetyClient(
  process.env.CONTENT_SAFETY_ENDPOINT!,
  new AzureKeyCredential(process.env.CONTENT_SAFETY_KEY!)
);
```

### DefaultAzureCredential
```typescript
import ContentSafetyClient from "@azure-rest/ai-content-safety";
import { DefaultAzureCredential } from "@azure/identity";

const client = ContentSafetyClient(
  process.env.CONTENT_SAFETY_ENDPOINT!,
  new DefaultAzureCredential()
);
```

## Analyze Text
```typescript
import ContentSafetyClient, { isUnexpected } from "@azure-rest/ai-content-safety";

const result = await client.path("/text:analyze").post({
  body: {
    text: "Text content to analyze",
    categories: ["Hate", "Sexual", "Violence", "SelfHarm"],
    outputType: "FourSeverityLevels" // or "EightSeverityLevels"
  }
});

if (isUnexpected(result)) {
  throw result.body;
}

for (const analysis of result.body.categoriesAnalysis) {
  console.log(`${analysis.category}: severity ${analysis.severity}`);
}
```

## Analyze Image
### Base64 Content
```typescript
import { readFileSync } from "node:fs";

const imageBuffer = readFileSync("./image.png");
const base64Image = imageBuffer.toString("base64");

const result = await client.path("/image:analyze").post({
  body: {
    image: { content: base64Image }
  }
});

if (isUnexpected(result)) {
  throw result.body;
}

for (const analysis of result.body.categoriesAnalysis) {
  console.log(`${analysis.category}: severity ${analysis.severity}`);
}
```

### Blob URL
```typescript
const result = await client.path("/image:analyze").post({
  body: {
    image: { blobUrl: "https://storage.blob.core.windows.net/container/image.png" }
  }
});
```

## Blocklist Management
### Create Blocklist
```typescript
const result = await client
  .path("/text/blocklists/{blocklistName}", "my-blocklist")
  .patch({
    contentType: "application/merge-patch+json",
    body: {
      description: "Custom blocklist for prohibited terms"
    }
  });

if (isUnexpected(result)) {
  throw result.body;
}

console.log(`Created: ${result.body.blocklistName}`);
```

### Add Items to Blocklist
```typescript
const result = await client
  .path("/text/blocklists/{blocklistName}:addOrUpdateBlocklistItems", "my-blocklist")
  .post({
    body: {
      blocklistItems: [
        { text: "prohibited-term-1", description: "First blocked term" },
        { text: "prohibited-term-2", description: "Second blocked term" }
      ]
    }
  });

if (isUnexpected(result)) {
  throw result.body;
}

for (const item of result.body.blocklistItems ?? []) {
  console.log(`Added: ${item.blocklistItemId}`);
}
```

### Analyze with Blocklist
```typescript
const result = await client.path("/text:analyze").post({
  body: {
    text: "Text that might contain blocked terms",
    blocklistNames: ["my-blocklist"],
    haltOnBlocklistHit: false
  }
});

if (isUnexpected(result)) {
  throw result.body;
}

// Check blocklist matches
if (result.body.blocklistsMatch) {
  for (const match of result.body.blocklistsMatch) {
    console.log(`Blocked: "${match.blocklistItemText}" from ${match.blocklistName}`);
  }
}
```

### List Blocklists
```typescript
const result = await client.path("/text/blocklists").get();

if (isUnexpected(result)) {
  throw result.body;
}

for (const blocklist of result.body.value ?? []) {
  console.log(`${blocklist.blocklistName}: ${blocklist.description}`);
}
```

### Delete Blocklist
typescript
await client.path("/text/blocklists/{blocklistName}", "my-blocklist").delete();typescript
await client.path("/text/blocklists/{blocklistName}", "my-blocklist").delete();Harm Categories
有害内容类别
| Category | API Term | Description |
|---|---|---|
| Hate and Fairness | `Hate` | Discriminatory language targeting identity groups |
| Sexual | `Sexual` | Sexual content, nudity, pornography |
| Violence | `Violence` | Physical harm, weapons, terrorism |
| Self-Harm | `SelfHarm` | Self-injury, suicide, eating disorders |
## Severity Levels
| Level | Risk | Recommended Action |
|---|---|---|
| 0 | Safe | Allow |
| 2 | Low | Review or allow with warning |
| 4 | Medium | Block or require human review |
| 6 | High | Block immediately |
Output Types:

- `FourSeverityLevels` (default): returns 0, 2, 4, 6
- `EightSeverityLevels`: returns 0-7
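The severity table above can be encoded as a small helper. The mapping below is a sketch that mirrors the recommended actions in the table; the `ModerationAction` type and the banding of intermediate `EightSeverityLevels` scores are illustrative choices, not part of the SDK:

```typescript
type ModerationAction = "allow" | "review" | "block";

// Map a severity score to the recommended action from the table above.
// Intermediate scores from "EightSeverityLevels" output (1, 3, 5, 7)
// fall into the band of the nearest documented level.
function recommendedAction(severity: number): ModerationAction {
  if (severity <= 1) return "allow";  // 0: Safe
  if (severity <= 3) return "review"; // 2: Low risk
  return "block";                     // 4+: Medium/High risk
}
```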
## Content Moderation Helper
```typescript
import ContentSafetyClient, { isUnexpected } from "@azure-rest/ai-content-safety";

interface ModerationResult {
  isAllowed: boolean;
  flaggedCategories: string[];
  maxSeverity: number;
  blocklistMatches: string[];
}

async function moderateContent(
  client: ReturnType<typeof ContentSafetyClient>,
  text: string,
  maxAllowedSeverity = 2,
  blocklistNames: string[] = []
): Promise<ModerationResult> {
  const result = await client.path("/text:analyze").post({
    body: { text, blocklistNames, haltOnBlocklistHit: false }
  });

  if (isUnexpected(result)) {
    throw result.body;
  }

  const flaggedCategories = result.body.categoriesAnalysis
    .filter(c => (c.severity ?? 0) > maxAllowedSeverity)
    .map(c => c.category!);

  // Seed Math.max with 0 so an empty analysis array yields 0, not -Infinity
  const maxSeverity = Math.max(
    0,
    ...result.body.categoriesAnalysis.map(c => c.severity ?? 0)
  );

  const blocklistMatches = (result.body.blocklistsMatch ?? [])
    .map(m => m.blocklistItemText!);

  return {
    isAllowed: flaggedCategories.length === 0 && blocklistMatches.length === 0,
    flaggedCategories,
    maxSeverity,
    blocklistMatches
  };
}
```

## API Endpoints
| Operation | Method | Path |
|---|---|---|
| Analyze Text | POST | `/text:analyze` |
| Analyze Image | POST | `/image:analyze` |
| Create/Update Blocklist | PATCH | `/text/blocklists/{blocklistName}` |
| List Blocklists | GET | `/text/blocklists` |
| Delete Blocklist | DELETE | `/text/blocklists/{blocklistName}` |
| Add Blocklist Items | POST | `/text/blocklists/{blocklistName}:addOrUpdateBlocklistItems` |
| List Blocklist Items | GET | `/text/blocklists/{blocklistName}/blocklistItems` |
| Remove Blocklist Items | POST | `/text/blocklists/{blocklistName}:removeBlocklistItems` |
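Listing and removing blocklist items are not demonstrated above. The sketch below shows how they would be called through the same `client.path(...)` pattern; the paths, request shape (`blocklistItemIds`), and expected status codes are drawn from the Content Safety REST API reference and should be verified against the current service version. The `client` parameter is typed loosely to keep the sketch self-contained:

```typescript
// List the items in a blocklist (GET /text/blocklists/{blocklistName}/blocklistItems).
async function listBlocklistItems(client: any, blocklistName: string) {
  const result = await client
    .path("/text/blocklists/{blocklistName}/blocklistItems", blocklistName)
    .get();
  if (result.status !== "200") throw result.body;
  return result.body.value ?? [];
}

// Remove items by ID (POST /text/blocklists/{blocklistName}:removeBlocklistItems).
async function removeBlocklistItems(client: any, blocklistName: string, ids: string[]) {
  const result = await client
    .path("/text/blocklists/{blocklistName}:removeBlocklistItems", blocklistName)
    .post({ body: { blocklistItemIds: ids } });
  if (result.status !== "204") throw result.body;
}
```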
## Key Types
```typescript
import ContentSafetyClient, {
  isUnexpected,
  AnalyzeTextParameters,
  AnalyzeImageParameters,
  TextCategoriesAnalysisOutput,
  ImageCategoriesAnalysisOutput,
  TextBlocklist,
  TextBlocklistItem
} from "@azure-rest/ai-content-safety";
```

## Best Practices
- Always use `isUnexpected()` - the type guard for error handling
- Set appropriate thresholds - different categories may need different severity thresholds
- Use blocklists for domain-specific terms - supplement AI detection with custom rules
- Log moderation decisions - keep an audit trail for compliance
- Handle edge cases - empty text, very long text, unsupported image formats
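The edge-case bullet can be made concrete with a small pre-flight check before calling `/text:analyze`. The character limit below is a placeholder constant, not a documented service limit; check the service documentation for the current per-request maximum:

```typescript
// Placeholder limit; verify the current per-request maximum in the service docs.
const MAX_TEXT_LENGTH = 10_000;

// Validate and normalize text before submitting it for analysis.
// Returns null for input that should not be submitted at all.
function prepareText(raw: string): string | null {
  const text = raw.trim();
  if (text.length === 0) return null;      // empty input: nothing to analyze
  if (text.length > MAX_TEXT_LENGTH) {
    return text.slice(0, MAX_TEXT_LENGTH); // naive truncation; chunking each slice may be better
  }
  return text;
}
```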
## When to Use

Use this skill to run the content-analysis and blocklist workflows described above: analyzing text or images for harmful content, or managing custom blocklists.