# azure-ai-contentsafety-py

Azure AI Content Safety SDK for Python. Use for detecting harmful content in text and images with multi-severity classification.
Install this agent skill to your project:

```bash
npx add-skill https://github.com/sickn33/antigravity-awesome-skills/tree/main/skills/azure-ai-contentsafety-py
```
## SKILL.md

Detect harmful user-generated and AI-generated content in applications.
## Installation

```bash
pip install azure-ai-contentsafety
```
## Environment Variables

```bash
CONTENT_SAFETY_ENDPOINT=https://<resource>.cognitiveservices.azure.com
CONTENT_SAFETY_KEY=<your-api-key>
```
## Authentication

### API Key

```python
import os

from azure.ai.contentsafety import ContentSafetyClient
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    endpoint=os.environ["CONTENT_SAFETY_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["CONTENT_SAFETY_KEY"]),
)
```
### Entra ID

Entra ID authentication requires the separate `azure-identity` package (`pip install azure-identity`).

```python
import os

from azure.ai.contentsafety import ContentSafetyClient
from azure.identity import DefaultAzureCredential

client = ContentSafetyClient(
    endpoint=os.environ["CONTENT_SAFETY_ENDPOINT"],
    credential=DefaultAzureCredential(),
)
```
## Analyze Text

```python
import os

from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions, TextCategory
from azure.core.credentials import AzureKeyCredential

endpoint = os.environ["CONTENT_SAFETY_ENDPOINT"]
key = os.environ["CONTENT_SAFETY_KEY"]
client = ContentSafetyClient(endpoint, AzureKeyCredential(key))

request = AnalyzeTextOptions(text="Your text content to analyze")
response = client.analyze_text(request)

# Check the severity reported for each harm category
for category in [TextCategory.HATE, TextCategory.SELF_HARM,
                 TextCategory.SEXUAL, TextCategory.VIOLENCE]:
    result = next((r for r in response.categories_analysis
                   if r.category == category), None)
    if result:
        print(f"{category}: severity {result.severity}")
```
## Analyze Image

```python
import os

from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeImageOptions, ImageData
from azure.core.credentials import AzureKeyCredential

endpoint = os.environ["CONTENT_SAFETY_ENDPOINT"]
key = os.environ["CONTENT_SAFETY_KEY"]
client = ContentSafetyClient(endpoint, AzureKeyCredential(key))

# From file: pass the raw image bytes, as in the SDK samples;
# the SDK handles encoding for the request
with open("image.jpg", "rb") as f:
    request = AnalyzeImageOptions(image=ImageData(content=f.read()))

response = client.analyze_image(request)
for result in response.categories_analysis:
    print(f"{result.category}: severity {result.severity}")
```
### Image from URL

```python
from azure.ai.contentsafety.models import AnalyzeImageOptions, ImageData

request = AnalyzeImageOptions(
    image=ImageData(blob_url="https://example.com/image.jpg")
)
response = client.analyze_image(request)
```
## Text Blocklist Management

### Create Blocklist

```python
import os

from azure.ai.contentsafety import BlocklistClient
from azure.ai.contentsafety.models import TextBlocklist
from azure.core.credentials import AzureKeyCredential

endpoint = os.environ["CONTENT_SAFETY_ENDPOINT"]
key = os.environ["CONTENT_SAFETY_KEY"]
blocklist_client = BlocklistClient(endpoint, AzureKeyCredential(key))

blocklist = TextBlocklist(
    blocklist_name="my-blocklist",
    description="Custom terms to block",
)
result = blocklist_client.create_or_update_text_blocklist(
    blocklist_name="my-blocklist",
    options=blocklist,
)
```
### Add Block Items

```python
from azure.ai.contentsafety.models import (
    AddOrUpdateTextBlocklistItemsOptions,
    TextBlocklistItem,
)

items = AddOrUpdateTextBlocklistItemsOptions(
    blocklist_items=[
        TextBlocklistItem(text="blocked-term-1"),
        TextBlocklistItem(text="blocked-term-2"),
    ]
)
result = blocklist_client.add_or_update_blocklist_items(
    blocklist_name="my-blocklist",
    options=items,
)
```
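Items can later be listed and removed. A sketch following the pattern in the SDK's blocklist samples (verify the method names against your installed version):

```python
from azure.ai.contentsafety.models import RemoveTextBlocklistItemsOptions

# Look up an item's ID by its text, then remove it from the blocklist
for item in blocklist_client.list_text_blocklist_items(blocklist_name="my-blocklist"):
    if item.text == "blocked-term-1":
        blocklist_client.remove_blocklist_items(
            blocklist_name="my-blocklist",
            options=RemoveTextBlocklistItemsOptions(
                blocklist_item_ids=[item.blocklist_item_id]
            ),
        )
```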
### Analyze with Blocklist

```python
from azure.ai.contentsafety.models import AnalyzeTextOptions

request = AnalyzeTextOptions(
    text="Text containing blocked-term-1",
    blocklist_names=["my-blocklist"],
    halt_on_blocklist_hit=True,
)
response = client.analyze_text(request)

if response.blocklists_match:
    for match in response.blocklists_match:
        print(f"Blocked: {match.blocklist_item_text}")
```
## Severity Levels

Text analysis returns four severity levels (0, 2, 4, 6) by default. For eight levels (0-7), request the eight-level output type:

```python
from azure.ai.contentsafety.models import AnalyzeTextOptions, AnalyzeTextOutputType

request = AnalyzeTextOptions(
    text="Your text",
    output_type=AnalyzeTextOutputType.EIGHT_SEVERITY_LEVELS,
)
```
## Harm Categories

| Category | Description |
|---|---|
| Hate | Attacks based on identity (race, religion, gender, etc.) |
| Sexual | Sexual content, relationships, anatomy |
| Violence | Physical harm, weapons, injury |
| SelfHarm | Self-injury, suicide, eating disorders |
## Severity Scale

| Level | Text Label | Image Label | Meaning |
|---|---|---|---|
| 0 | Safe | Safe | No harmful content |
| 2 | Low | Low | Mild references |
| 4 | Medium | Medium | Moderate content |
| 6 | High | High | Severe content |
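A small illustrative helper (not part of the SDK) that buckets a returned severity into the labels above; the `>=` comparisons mean it also handles eight-level (0-7) output:

```python
def severity_label(severity: int) -> str:
    """Map a severity value (0-7) onto the Safe/Low/Medium/High scale."""
    if severity >= 6:
        return "High"
    if severity >= 4:
        return "Medium"
    if severity >= 2:
        return "Low"
    return "Safe"
```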
## Client Types

| Client | Purpose |
|---|---|
| ContentSafetyClient | Analyze text and images |
| BlocklistClient | Manage custom blocklists |
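Both clients are constructed the same way and can share one endpoint and credential:

```python
import os

from azure.ai.contentsafety import BlocklistClient, ContentSafetyClient
from azure.core.credentials import AzureKeyCredential

endpoint = os.environ["CONTENT_SAFETY_ENDPOINT"]
credential = AzureKeyCredential(os.environ["CONTENT_SAFETY_KEY"])

content_client = ContentSafetyClient(endpoint, credential)
blocklist_client = BlocklistClient(endpoint, credential)
```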
## Best Practices

- Use blocklists for domain-specific terms
- Set severity thresholds appropriate for your use case (see the sketch after this list)
- Handle multiple categories; content can be harmful in more than one way
- Use `halt_on_blocklist_hit` for immediate rejection
- Log analysis results for audit and improvement
- Consider 8-severity mode for finer-grained control
- Pre-moderate AI outputs before showing them to users
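Several of these practices combine naturally into a single gate. A hedged sketch (the `moderate_text` helper and the threshold values are hypothetical, not part of the SDK):

```python
import logging
import os

from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions, TextCategory
from azure.core.credentials import AzureKeyCredential

logger = logging.getLogger("moderation")

# Hypothetical per-category thresholds on the default 0/2/4/6 scale
THRESHOLDS = [
    (TextCategory.HATE, 2),
    (TextCategory.SELF_HARM, 2),
    (TextCategory.SEXUAL, 4),
    (TextCategory.VIOLENCE, 4),
]

def moderate_text(client: ContentSafetyClient, text: str) -> bool:
    """Return True if the text falls below every category threshold."""
    response = client.analyze_text(AnalyzeTextOptions(text=text))
    passed = True
    for category, threshold in THRESHOLDS:
        result = next((r for r in response.categories_analysis
                       if r.category == category), None)
        severity = result.severity if result and result.severity is not None else 0
        logger.info("category=%s severity=%s", category, severity)  # audit trail
        if severity >= threshold:
            passed = False
    return passed

if __name__ == "__main__":
    client = ContentSafetyClient(
        endpoint=os.environ["CONTENT_SAFETY_ENDPOINT"],
        credential=AzureKeyCredential(os.environ["CONTENT_SAFETY_KEY"]),
    )
    print(moderate_text(client, "Some user-generated text"))
```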
## When to Use

Use this skill whenever an application needs to screen text or images for harmful content: moderating user-generated content, filtering AI-generated output before display, or enforcing custom blocklists.