# OpenAI
LogicGrid talks to the OpenAI Chat Completions API directly — no official OpenAI SDK is required.
- API keys: platform.openai.com/api-keys
- Models: platform.openai.com/docs/models
- Pricing: openai.com/api/pricing
## Use it
There are two equivalent ways to instantiate the OpenAI LLM client.
### Option 1 — static factory (recommended)

```csharp
using LogicGrid.Core.Llm;

var llm = LlmClientBase.OpenAi(
    apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY")!,
    model: "gpt-4o-mini");
```

| Parameter | Type | Default | Notes |
|---|---|---|---|
| apiKey | string | (required) | OpenAI secret key, sent as a Bearer header. Never hardcode it. |
| model | string | "gpt-4o-mini" | Any model exposed via the OpenAI Chat Completions API. |
### Option 2 — direct construction

```csharp
using LogicGrid.Core.Providers;

var llm = new OpenAiClient(
    apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY")!,
    defaultModel: "gpt-4o-mini");
```

| Parameter | Type | Default | Notes |
|---|---|---|---|
| apiKey | string | (required) | Same as the factory's apiKey. |
| defaultModel | string | "gpt-4o" | The model used when the agent or call site doesn't override it. |
The factory and the constructor produce equivalent clients. Use direct construction when you need to inject your own HttpClient (for retries, proxies, or testing); when you supply the HttpClient yourself, you are responsible for setting the Authorization: Bearer header on it.
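As a sketch, here is what the injected-HttpClient path might look like. This assumes OpenAiClient exposes an HttpClient-first overload mirroring the one the embedding client documents below; the httpClient parameter name is an assumption, so check your version's signature:

```csharp
using System.Net.Http;
using LogicGrid.Core.Providers;

// Pre-configured client: the caller owns auth, retries, and proxies.
var http = new HttpClient();
http.DefaultRequestHeaders.Add("Authorization",
    $"Bearer {Environment.GetEnvironmentVariable("OPENAI_API_KEY")}");

// Assumed overload — modeled on OpenAiEmbeddingClient's
// httpClient-first constructor shown in the Embeddings section.
var llm = new OpenAiClient(
    httpClient: http,
    defaultModel: "gpt-4o-mini");
```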
## Models currently available
LogicGrid doesn't restrict the model name — anything OpenAI exposes through the Chat Completions API will work. Common picks:
| Model | Notes |
|---|---|
| gpt-4o | Strongest general model. |
| gpt-4o-mini | Cheap, fast, capable — a good default. |
| gpt-4-turbo | Earlier flagship; still capable. |
| gpt-3.5-turbo | Cheapest legacy chat model. |
| o1-mini / o1 | Reasoning models — slower, stronger on hard problems. |
For the live list including pinned dated versions (gpt-4o-2024-11-20,
etc.), see the OpenAI models page.
## Tool calling
OpenAI supports native tool calling. Opt in on agents that need it:
```csharp
protected override IToolCallingStrategy ToolCallingStrategy
    => new NativeToolCallingStrategy();
```
The native strategy is generally more reliable than the prompt-schema strategy on gpt-4o and newer.
## Full example — calculator tool over OpenAI
```csharp
using LogicGrid.Core.Agents;
using LogicGrid.Core.Llm;
using LogicGrid.Core.Tools;
using LogicGrid.Tools.Tools;

var llm = LlmClientBase.OpenAi(
    apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY")!,
    model: "gpt-4o-mini");

IAgent math = new Agent<string>(
    name: "Mathlete",
    description: "Solves arithmetic.",
    systemPrompt: "Use the calculator tool when the user asks for a number.",
    llm: llm,
    tools: new ToolBase[] { new CalculatorTool() },
    toolCallingStrategy: new NativeToolCallingStrategy());

Console.WriteLine(
    await math.RunAsync(
        input: "What is (17 * 23) + 91?",
        ctx: new AgentContext().WithLogging()));
```

```
09:14:02.118 [INF] [Mathlete] started
09:14:02.420 [INF] [Mathlete] tool call → calculator { "expression": "(17 * 23) + 91" }
09:14:02.430 [INF] [Mathlete] tool result | 482
09:14:03.180 [INF] [Mathlete] completed | output: (17 * 23) + 91 = 482.
(17 * 23) + 91 = 482.
```
Background: Tool calling strategy.
## Embeddings
Same two-way pattern as the LLM client.
### Option 1 — static factory (recommended)

```csharp
using LogicGrid.Memory.Embeddings;

var embedder = EmbeddingClientBase.OpenAi(
    apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY")!,
    model: "text-embedding-3-small");
```

| Parameter | Type | Default | Notes |
|---|---|---|---|
| apiKey | string | (required) | OpenAI secret key. |
| model | string | "text-embedding-3-small" | Any OpenAI embedding model: text-embedding-3-small, text-embedding-3-large, text-embedding-ada-002. |
| dimensions | int | 1536 | Vector size. Defaults match text-embedding-3-small. Use 3072 for -3-large. |
### Option 2 — direct construction

Mirrors OpenAiClient exactly: apiKey first, then defaultModel.

```csharp
using LogicGrid.Memory.Embeddings;

var embedder = new OpenAiEmbeddingClient(
    apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY")!,
    defaultModel: "text-embedding-3-small",
    dimensions: 1536);
```

| Parameter | Type | Default | Notes |
|---|---|---|---|
| apiKey | string | (required) | OpenAI secret key. Sent as Bearer header. |
| defaultModel | string | "text-embedding-3-small" | Embedding model used when the call site doesn't override it. |
| dimensions | int | 1536 | Vector size. Use 3072 for text-embedding-3-large. |
A second overload accepts a custom HttpClient for retries, proxies, or DI — the caller is responsible for setting the Authorization: Bearer header:
```csharp
using System.Net.Http;
using LogicGrid.Memory.Embeddings;

var http = new HttpClient();
http.DefaultRequestHeaders.Add("Authorization",
    $"Bearer {Environment.GetEnvironmentVariable("OPENAI_API_KEY")}");

var embedder = new OpenAiEmbeddingClient(
    httpClient: http,
    defaultModel: "text-embedding-3-small",
    dimensions: 1536);
```

| Parameter | Type | Default | Notes |
|---|---|---|---|
| httpClient | HttpClient | (required) | Pre-configured client. Caller sets the Authorization header. |
| defaultModel | string | "text-embedding-3-small" | Same as above. |
| dimensions | int | 1536 | Same as above. |
## Cost tracking

OpenAiClient.Pricing returns per-token rates for known models. After each call, the admin reads LlmResponse.Usage and converts it to a CostEstimate. See Tracing for how to read total spend per run. Live rates: openai.com/api/pricing.
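The conversion itself is plain per-token arithmetic. A minimal sketch of the math — the rate figures here are illustrative assumptions, not LogicGrid's stored values or OpenAI's live prices:

```csharp
// Illustrative rates, quoted per 1M tokens. NOT live prices —
// check openai.com/api/pricing for current figures.
decimal inputPerMillion = 0.15m;   // hypothetical gpt-4o-mini input rate
decimal outputPerMillion = 0.60m;  // hypothetical gpt-4o-mini output rate

// Token counts as they would appear in LlmResponse.Usage.
int promptTokens = 1_200;
int completionTokens = 300;

decimal cost =
    promptTokens * inputPerMillion / 1_000_000m +
    completionTokens * outputPerMillion / 1_000_000m;

Console.WriteLine($"{cost:F6} USD");  // → 0.000360 USD
```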
## API key handling

Never hardcode the key. Use environment variables, dotnet user-secrets, or a managed secret store. LogicGrid sends the key to OpenAI as a Bearer Authorization header and does not log it.
## Troubleshooting

- 401 Unauthorized — wrong or missing API key.
- 429 Too Many Requests — rate-limited. Lower the parallelism or upgrade your tier.
- model not found — either the model name is wrong, or your account doesn't have access to it. Verify on the OpenAI dashboard.
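For 429s, a simple exponential backoff around the agent call can help before you change tiers. A sketch only: how LogicGrid surfaces provider errors is an assumption here (this catches .NET's HttpRequestException with a 429 status), so adapt the caught type to whatever your version actually throws:

```csharp
using System.Net;
using System.Net.Http;

// Hypothetical helper: retries a call when the provider rate-limits,
// backing off 2s, 4s, 8s between attempts.
async Task<string> WithBackoffAsync(Func<Task<string>> call, int maxAttempts = 4)
{
    for (int attempt = 1; ; attempt++)
    {
        try
        {
            return await call();
        }
        catch (HttpRequestException ex)
            when (ex.StatusCode == HttpStatusCode.TooManyRequests
                  && attempt < maxAttempts)
        {
            await Task.Delay(TimeSpan.FromSeconds(Math.Pow(2, attempt)));
        }
    }
}

// Usage: var answer = await WithBackoffAsync(() => math.RunAsync(input, ctx));
```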