- In Azure Foundry, under “Use with Foundry” → Foundry, select the aiportal Foundry project
- From there, click “Go to Foundry Portal”

- Once in the Foundry Portal, select “Models and Endpoints” in the left-side menu

- Click Deploy Model → Deploy Base Model

- Select the model from the list of providers, click Confirm, and then click Deploy

- If you are not able to deploy a model, make sure you have:
  - Requested GPT-5 access
  - Enough quota; if you do not, you can request more model quota
- Once deployed, you should be taken to the model’s screen (if not, select the model from the list of model deployments)

- Take note of:
  - Endpoint: the “azure endpoint” shown in the code panel on the right (e.g. https://demo-aiportal.cognitiveservices.azure.com/)
  - Deployment: the “Name” in the table on the left (e.g. gpt-5.4)
  - Model Deployment Quota: the “Rate Limit” in the lower-left table (e.g. 150,000)
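As a sanity check on the values you noted, here is a minimal sketch of how the Endpoint and Deployment name combine into the Azure OpenAI chat-completions URL. The endpoint and deployment below are this guide's example values, and the `api-version` string is an assumption; use whichever version your portal's code sample shows.

```python
# Example values noted from the Foundry model screen (yours will differ).
endpoint = "https://demo-aiportal.cognitiveservices.azure.com"  # "azure endpoint"
deployment = "gpt-5.4"                                          # deployment "Name"
api_version = "2024-02-01"                                      # assumed; check your portal

# Azure OpenAI routes requests by deployment name, not by model name:
url = (
    f"{endpoint}/openai/deployments/{deployment}"
    f"/chat/completions?api-version={api_version}"
)
print(url)
```

If the URL you build this way does not match the one in the portal's code panel, re-check the deployment name before adding the provider to AI Portal.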
- Once the model has been deployed in Azure and the values have been noted, add the provider to your AI Portal by following “Adding / Updating a ChatGPT Provider in AI Portal”