1. In Azure Foundry, under “Use with Foundry” → Foundry, select the aiportal foundry project, then click “Go to Foundry Portal”.

    image.png

2. Once in the Foundry portal, select “Models and Endpoints” in the left-side menu.

    image.png

3. Click Deploy Model → Deploy Base Model.

    Screenshot 2026-03-23 at 10.51.37 AM.png

4. Select the model from the list of providers, click Confirm, and then click Deploy.

    Screenshot 2026-03-23 at 10.54.39 AM.png

5. Once deployed, you should be taken to the model’s screen (if not, select the model from the list of model deployments).

    image.png

6. Take note of:

    - Endpoint: the “azure endpoint” shown in the code panel on the right (in the screenshot above: https://dev-aiportal.services.ai.azure.com/openai/v1/)
    - Deployment: the “Name” in the table on the left (in the screenshot above: grok-4-1-fast-reasoning)
    - Model Deployment Quota: the “Rate Limit” in the lower-left table (in the screenshot above: 50,000)
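As a quick sanity check on the noted values, the sketch below shows how the Endpoint and Deployment name from step 6 would be combined into a request against the OpenAI-compatible v1 endpoint. This is a minimal illustration, not the AI Portal's own integration code; the `AZURE_AI_API_KEY` environment-variable name is an assumption, and the actual POST is left commented out.

```python
import os
from urllib.parse import urljoin

# Values noted from the Foundry deployment screen (step 6).
endpoint = "https://dev-aiportal.services.ai.azure.com/openai/v1/"
deployment = "grok-4-1-fast-reasoning"

# Build the chat-completions URL for the OpenAI-compatible v1 endpoint.
url = urljoin(endpoint, "chat/completions")

payload = {
    "model": deployment,  # Foundry routes the request by deployment name
    "messages": [{"role": "user", "content": "ping"}],
}
headers = {
    # AZURE_AI_API_KEY is a hypothetical env-var name; use your project's key.
    "Authorization": f"Bearer {os.environ.get('AZURE_AI_API_KEY', '<key>')}",
    "Content-Type": "application/json",
}

print(url)
# To actually send it: requests.post(url, json=payload, headers=headers)
```

If the Endpoint was copied correctly, the printed URL should end in `/openai/v1/chat/completions`; a 401 response from a real call usually means the key is wrong, while a 404 points at a mistyped deployment name.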

Once the model has been deployed in Azure and the values above have been noted, add the provider to your AI Portal by following Adding / Updating a Grok Provider in AI Portal.