> azure-ai-vision

Expert knowledge for Azure AI Vision development including decision making, limits & quotas, configuration, integrations & coding patterns, and deployment. Use when using Image Analysis, Read OCR containers, Blob Storage image access, smart-crop thumbnails, or video frame analysis, and other Azure AI Vision related development tasks. Not for Azure AI services (use microsoft-foundry-tools), Azure AI Custom Vision (use azure-custom-vision), Azure AI Video Indexer (use azure-video-indexer), Azure A…

Fetch: `curl "https://skillshub.wtf/MicrosoftDocs/Agent-Skills/azure-ai-vision?format=md"`

Azure AI Vision Skill

This skill provides expert guidance for Azure AI Vision, covering decision making, limits & quotas, configuration, integrations & coding patterns, and deployment. It combines local quick-reference content with remote documentation fetching.

How to Use This Skill

IMPORTANT for Agent: Use the Category Index below to locate relevant sections. For categories with line ranges (e.g., L35-L120), use read_file with the specified lines. For categories with file links (e.g., [security.md](security.md)), use read_file on the linked reference file.

IMPORTANT for Agent: If metadata.generated_at is more than 3 months old, suggest that the user pull the latest version from the repository. If the mcp_microsoftdocs tools are not available, suggest that the user install them: Installation Guide.
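The 3-month staleness rule above can be sketched as a small check. The helper name `skill_is_stale` and the 90-day approximation of "3 months" are illustrative assumptions, not part of the skill's metadata contract:

```python
from datetime import datetime, timedelta, timezone

def skill_is_stale(generated_at: str, max_age_days: int = 90) -> bool:
    """Return True if metadata.generated_at is older than ~3 months.

    Assumes generated_at is an ISO-8601 timestamp; 90 days is a rough
    stand-in for the "3 months" threshold named in the guidance above.
    """
    ts = datetime.fromisoformat(generated_at.replace("Z", "+00:00"))
    if ts.tzinfo is None:
        # Treat naive timestamps as UTC so the comparison is well-defined.
        ts = ts.replace(tzinfo=timezone.utc)
    return datetime.now(timezone.utc) - ts > timedelta(days=max_age_days)
```

If the check returns True, the agent should surface the "pull the latest version" suggestion before relying on the local quick-reference content.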

This skill requires network access to fetch documentation content:

  • Preferred: Use mcp_microsoftdocs:microsoft_docs_fetch with the query string from=learn-agent-skill. Returns Markdown.
  • Fallback: Use fetch_webpage with the query string from=learn-agent-skill&accept=text/markdown. Returns Markdown.
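The two routes above differ only in their query strings. A minimal sketch of appending those parameters to a docs URL (the helper name `add_skill_params` is hypothetical; only the parameter names `from` and `accept` come from the text above):

```python
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

def add_skill_params(url: str, fallback: bool = False) -> str:
    """Append the query parameters the skill expects when fetching docs.

    Hypothetical helper for illustration: 'from=learn-agent-skill' is sent
    on both routes; the fetch_webpage fallback also asks for Markdown via
    'accept=text/markdown'.
    """
    params = {"from": "learn-agent-skill"}
    if fallback:
        params["accept"] = "text/markdown"
    parts = urlsplit(url)
    # Preserve any query parameters already on the URL.
    query = dict(parse_qsl(parts.query))
    query.update(params)
    return urlunsplit(parts._replace(query=urlencode(query)))
```

Either tool then receives the rewritten URL and returns Markdown for the agent to read.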

Category Index

| Category | Lines | Description |
| --- | --- | --- |
| Decision Making | L33-L39 | Guides for planning and executing migrations and upgrades between Azure Vision Image Analysis and Read OCR versions/containers, including breaking changes and app update steps. |
| Limits & Quotas | L40-L51 | Limits, thresholds, and taxonomies for Image Analysis: category lists, adult content scores, object/people detection constraints, smart-crop behavior, and OCR language support. |
| Configuration | L52-L57 | Configuring Vision Read OCR containers and setting up Azure Blob Storage access for image retrieval, including environment settings, networking, and storage connection details. |
| Integrations & Coding Patterns | L58-L68 | How to call and configure Azure Vision/Read APIs and SDKs for OCR, embeddings, thumbnails, background removal, domain models, and live video frame analysis. |
| Deployment | L69-L72 | Installing, configuring, and running the Azure AI Vision Read OCR container locally or on-premises, including prerequisites, deployment steps, and runtime settings. |
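An agent consuming the Category Index needs to translate a category name into the line range it passes to read_file. A hypothetical sketch of that lookup (the `CATEGORY_LINES` mapping simply mirrors the index above; the function name is illustrative):

```python
# Mapping mirroring the Category Index above: category -> (start, end) lines.
CATEGORY_LINES = {
    "Decision Making": (33, 39),
    "Limits & Quotas": (40, 51),
    "Configuration": (52, 57),
    "Integrations & Coding Patterns": (58, 68),
    "Deployment": (69, 72),
}

def lines_for(category: str) -> tuple:
    """Return the inclusive (start, end) line range for a category."""
    return CATEGORY_LINES[category]
```

Categories that point at a linked reference file instead of a line range would bypass this lookup and read the linked file directly.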

Decision Making

| Topic | URL |
| --- | --- |
| Plan migration from Azure Vision Image Analysis | https://learn.microsoft.com/en-us/azure/ai-services/computer-vision/migration-options |
| Migrate to Azure Vision Read OCR container v3.x | https://learn.microsoft.com/en-us/azure/ai-services/computer-vision/read-container-migration-guide |
| Upgrade applications from Read v2.x to v3.0 | https://learn.microsoft.com/en-us/azure/ai-services/computer-vision/upgrade-api-versions |

Limits & Quotas

| Topic | URL |
| --- | --- |
| Reference taxonomy categories for Azure Vision | https://learn.microsoft.com/en-us/azure/ai-services/computer-vision/category-taxonomy |
| Understand Image Analysis 3.2 categorization taxonomy limits | https://learn.microsoft.com/en-us/azure/ai-services/computer-vision/concept-categorizing-images |
| Interpret adult content detection scores and thresholds | https://learn.microsoft.com/en-us/azure/ai-services/computer-vision/concept-detecting-adult-content |
| Use smart-cropped thumbnails with Image Analysis 4.0 | https://learn.microsoft.com/en-us/azure/ai-services/computer-vision/concept-generate-thumbnails-40 |
| Use object detection and understand feature limits | https://learn.microsoft.com/en-us/azure/ai-services/computer-vision/concept-object-detection |
| Understand Image Analysis 4.0 object detection limits | https://learn.microsoft.com/en-us/azure/ai-services/computer-vision/concept-object-detection-40 |
| Use people detection and understand its limits | https://learn.microsoft.com/en-us/azure/ai-services/computer-vision/concept-people-detection |
| Check supported languages for Azure Vision OCR | https://learn.microsoft.com/en-us/azure/ai-services/computer-vision/language-support |

Configuration

| Topic | URL |
| --- | --- |
| Configure Azure Vision Read OCR containers | https://learn.microsoft.com/en-us/azure/ai-services/computer-vision/computer-vision-resource-container-config |
| Configure Azure Blob Storage for Vision image retrieval | https://learn.microsoft.com/en-us/azure/ai-services/computer-vision/how-to/blob-storage-search |

Integrations & Coding Patterns

| Topic | URL |
| --- | --- |
| Call domain-specific models with Azure Vision | https://learn.microsoft.com/en-us/azure/ai-services/computer-vision/concept-detecting-domain-content |
| Analyze live video frames with Azure Vision API | https://learn.microsoft.com/en-us/azure/ai-services/computer-vision/how-to/analyze-video |
| Call and configure Image Analysis 3.2 API | https://learn.microsoft.com/en-us/azure/ai-services/computer-vision/how-to/call-analyze-image |
| Call and configure Image Analysis 4.0 API | https://learn.microsoft.com/en-us/azure/ai-services/computer-vision/how-to/call-analyze-image-40 |
| Call and configure Azure Vision Read v3.2 API | https://learn.microsoft.com/en-us/azure/ai-services/computer-vision/how-to/call-read-api |
| Use multimodal embeddings for image retrieval | https://learn.microsoft.com/en-us/azure/ai-services/computer-vision/how-to/image-retrieval |
| Use OCR client libraries for text extraction | https://learn.microsoft.com/en-us/azure/ai-services/computer-vision/quickstarts-sdk/client-library |

Deployment

| Topic | URL |
| --- | --- |
| Install and run Azure Vision Read OCR container | https://learn.microsoft.com/en-us/azure/ai-services/computer-vision/computer-vision-how-to-install-containers |

Related Skills (same repo)

> microsoft-foundry

Expert knowledge for Microsoft Foundry (aka Azure AI Foundry) development including troubleshooting, best practices, decision making, architecture & design patterns, limits & quotas, security, configuration, integrations & coding patterns, and deployment. Use when building Foundry agents with Azure OpenAI, vector search/RAG, Sora video, realtime audio, or MCP/LangChain APIs, and other Microsoft Foundry related development tasks. Not for Microsoft Foundry Classic (use microsoft-foundry-classic),…

> microsoft-foundry-tools

Expert knowledge for Microsoft Foundry Tools (aka Azure AI services, Azure Cognitive Services) development including best practices, decision making, architecture & design patterns, limits & quotas, security, configuration, integrations & coding patterns, and deployment. Use when using Content Understanding analyzers, Content Moderator APIs, Foundry containers, VNet/Key Vault security, or Entra auth, and other Microsoft Foundry Tools related development tasks. Not for Microsoft Foundry (use micr…

> microsoft-foundry-local

Expert knowledge for Microsoft Foundry Local (aka Azure AI Foundry Local) development including troubleshooting, best practices, decision making, configuration, and integrations & coding patterns. Use when using Foundry Local CLI, chat/transcription APIs, tools, OpenAI/LangChain clients, or upgrading legacy SDKs, and other Microsoft Foundry Local related development tasks. Not for Microsoft Foundry (use microsoft-foundry), Microsoft Foundry Classic (use microsoft-foundry-classic), Microsoft Foun…

> microsoft-foundry-classic

Expert knowledge for Microsoft Foundry Classic (aka Azure AI Foundry classic) development including troubleshooting, best practices, decision making, architecture & design patterns, limits & quotas, security, configuration, integrations & coding patterns, and deployment. Use when building Foundry agents with RAG, tools, evaluators, Azure OpenAI, VNet/Private Link, or CI/CD deployments, and other Microsoft Foundry Classic related development tasks. Not for Microsoft Foundry (use microsoft-foundry…
Repo: MicrosoftDocs/Agent-Skills (by MicrosoftDocs)
