Azure OpenAI
LogicGrid talks to Azure OpenAI over its native per-deployment URL
layout, using the api-key header for authentication. The wire
format is the same OpenAI Chat Completions schema, but the model
you call is actually a deployment name you create in the Azure
portal — it maps to an underlying model (gpt-4o, gpt-4o-mini,
etc.).
- Resource & deployment management: Azure OpenAI Studio
- Models and regions: learn.microsoft.com/azure/ai-services/openai/concepts/models
- Pricing: azure.microsoft.com/pricing/details/cognitive-services/openai-service
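For orientation, the per-deployment request URL that Azure OpenAI expects (and that LogicGrid builds internally) looks like this — a sketch only; you never construct it yourself:

```csharp
// Azure OpenAI's per-deployment URL layout:
//   {endpoint}/openai/deployments/{deploymentName}/chat/completions?api-version={apiVersion}
var endpoint       = "https://my-resource.openai.azure.com";
var deploymentName = "my-gpt-4o-deployment";
var apiVersion     = "2024-10-21";

var url = $"{endpoint}/openai/deployments/{deploymentName}/chat/completions?api-version={apiVersion}";

// Authentication is a plain header, not a Bearer token:
//   api-key: <your resource key>
```

Note that the deployment name appears in the URL path, not in the request body's model field — which is why the 404 and DeploymentNotFound errors in Troubleshooting below are path/version problems rather than payload problems.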
Deployment name vs. model name
In Azure OpenAI you create a deployment in the portal that maps a
model (e.g. gpt-4o) to a logical name (e.g.
my-gpt-4o-deployment). LogicGrid's deploymentName argument is the
logical name; pass the underlying model id as underlyingModel so
the client can look up per-token pricing for cost tracking.
Use it
There are three equivalent ways to instantiate the Azure OpenAI client.
Option 1 — static factory (recommended)
```csharp
using LogicGrid.Core.Llm;

var llm = LlmClientBase.AzureOpenAi(
    endpoint: "https://my-resource.openai.azure.com",
    apiKey: Environment.GetEnvironmentVariable("AZURE_OPENAI_KEY")!,
    deploymentName: "my-gpt-4o-deployment",
    apiVersion: "2024-10-21",
    underlyingModel: "gpt-4o"); // optional — used for cost lookup
```
| Parameter | Type | Default | Notes |
|---|---|---|---|
| endpoint | string | (required) | Azure OpenAI resource URL, e.g. https://my-resource.openai.azure.com. |
| apiKey | string | (required) | Resource API key, sent as the api-key header. |
| deploymentName | string | (required) | The deployment name from the Azure portal — not the model id. |
| apiVersion | string | "2024-10-21" | Azure OpenAI API version. Older versions may not see newer deployments. |
| underlyingModel | string? | null | Optional model id (e.g. gpt-4o) for per-token pricing lookup. |
Option 2 — direct construction
```csharp
using LogicGrid.Core.Providers;

var llm = new AzureOpenAiClient(
    endpoint: "https://my-resource.openai.azure.com",
    apiKey: Environment.GetEnvironmentVariable("AZURE_OPENAI_KEY")!,
    deploymentName: "my-gpt-4o-deployment",
    apiVersion: "2024-10-21",
    underlyingModel: "gpt-4o");
```
Identical parameters to the factory. Prefer direct construction when you intend to move to the injected-HttpClient overload described below (for retries, proxies, or testing).
Option 3 — injected HttpClient
```csharp
using LogicGrid.Core.Providers;

var llm = new AzureOpenAiClient(
    httpClient: myHttpClient,
    endpoint: "https://my-resource.openai.azure.com",
    deploymentName: "my-gpt-4o-deployment",
    apiVersion: "2024-10-21",
    underlyingModel: "gpt-4o");
```
This overload takes no apiKey — the caller is responsible for setting the api-key header on the supplied HttpClient (typically via IHttpClientFactory, a DelegatingHandler, or test fakes). Use it when auth is managed outside the client, or in unit tests with a mocked transport.
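The simplest way to satisfy that contract is a default request header. A minimal sketch (using the standard System.Net.Http API; in production you would typically register this via IHttpClientFactory instead of new-ing the client):

```csharp
using System;
using System.Net.Http;
using LogicGrid.Core.Providers;

// This overload adds no auth itself, so set the api-key default header
// on the HttpClient before handing it to the client.
var httpClient = new HttpClient();
httpClient.DefaultRequestHeaders.Add(
    "api-key", Environment.GetEnvironmentVariable("AZURE_OPENAI_KEY")!);

var llm = new AzureOpenAiClient(
    httpClient: httpClient,
    endpoint: "https://my-resource.openai.azure.com",
    deploymentName: "my-gpt-4o-deployment",
    apiVersion: "2024-10-21",
    underlyingModel: "gpt-4o");
```

In unit tests, the same overload accepts an HttpClient wired to a fake HttpMessageHandler, so no key is needed at all.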
Tool calling
Azure OpenAI supports the same native tool-call protocol as OpenAI.
The default PromptSchemaStrategy works everywhere; switch to
NativeToolCallingStrategy when targeting gpt-4o and similar
models. See Tool calling strategy.
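As a rough, hypothetical sketch of switching strategies — the ToolCallingStrategy property name below is an assumption for illustration, not the confirmed API; the Tool calling strategy page documents the real wiring:

```csharp
using LogicGrid.Core.Llm;

var llm = LlmClientBase.AzureOpenAi(
    endpoint: "https://my-resource.openai.azure.com",
    apiKey: Environment.GetEnvironmentVariable("AZURE_OPENAI_KEY")!,
    deploymentName: "my-gpt-4o-deployment",
    underlyingModel: "gpt-4o");

// gpt-4o deployments understand OpenAI's native tool-call protocol, so
// the native strategy can replace the default prompt-schema fallback.
// (Hypothetical property — see the Tool calling strategy page.)
llm.ToolCallingStrategy = new NativeToolCallingStrategy();
```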
Troubleshooting
- 401 — wrong api-key, or the key doesn't belong to the resource at the given endpoint.
- 404 — deploymentName doesn't exist in this resource. Create it in Azure OpenAI Studio.
- DeploymentNotFound despite a successful create — check apiVersion. Older API versions don't see newer deployments.