This assumes that an Azure deployment has already been created. If you’re adding a new Azure-hosted model for the first time, start with *Deploying a GPT Model in Microsoft Foundry*.

  1. From the Providers screen (Admin Panel → Providers), click “Add Provider Config”, or click the pencil icon to update an existing provider.


  2. Fill in the values noted below, then save the configuration.
| Name | Value |
| --- | --- |
| Provider | `azure` (the adapter name for OpenAI models on Azure) |
| Model | The model that was deployed |
| Pretty Name | The name shown in the provider dropdown on the main page (usually ChatGPT) |
| Pretty Model | The model name shown in the provider dropdown (usually the same as Model) |
| Category | Set the model as Internal (within your existing data governance) or External (outside your existing data governance) |
| Endpoint | The endpoint, usually `https://{NAME}-aiportal.cognitiveservices.azure.com/openai/v1/` |
| Deployment | The name of the deployment |
| Model Deployment Quota | The rate limit for the model deployment |
| Tags | Feature flags for the provider: `disabled` disables selection in the chat dropdown; `upload` allows file uploads (discouraged for External models); `dlp` enables DLP (encouraged for External models); `hidden` hides the selection in the chat dropdown |
| RAG Search | Optional. Links this Provider Config to a RAG Search Config; responses from the model will include context from your RAG configuration |
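To make the fields above concrete, here is a sketch of a filled-in config expressed as a Python dict. Every value shown (the resource name `contoso`, the model `gpt-4o`, the deployment name, and the quota) is a hypothetical example, not a real deployment; the key names simply mirror the form fields.

```python
# Hypothetical example of the values entered on the Provider Config form.
# All concrete values below are placeholders for illustration only.
resource_name = "contoso"  # the {NAME} portion of your Azure resource

provider_config = {
    "provider": "azure",        # adapter name for OpenAI models on Azure
    "model": "gpt-4o",          # the model that was deployed (example)
    "pretty_name": "ChatGPT",   # shown in the provider dropdown
    "pretty_model": "gpt-4o",   # usually the same as Model
    "category": "External",     # Internal or External data governance
    "endpoint": f"https://{resource_name}-aiportal.cognitiveservices.azure.com/openai/v1/",
    "deployment": "gpt-4o-deployment",  # name of the Azure deployment (example)
    "model_deployment_quota": 60,       # rate limit for the deployment (example)
    "tags": ["dlp"],            # dlp is encouraged for External models
}

print(provider_config["endpoint"])
```

Note how the endpoint is derived from the Azure resource name: only the `{NAME}` portion changes between deployments, so a typo there is the most common cause of a failing provider config.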
