
OpenAI

LogicGrid talks to the OpenAI Chat Completions API directly — no official OpenAI SDK is required.

Use it

There are two equivalent ways to instantiate the OpenAI LLM client.

Option 1 — factory method

using LogicGrid.Core.Llm;

var llm = LlmClientBase.OpenAi(
    apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY")!,
    model: "gpt-4o-mini");

Parameter | Type | Default | Notes
apiKey | string | (required) | OpenAI secret key, sent as a Bearer header. Never hardcode it.
model | string | "gpt-4o-mini" | Any model exposed via the OpenAI Chat Completions API.

Option 2 — direct construction

using LogicGrid.Core.Providers;

var llm = new OpenAiClient(
    apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY")!,
    defaultModel: "gpt-4o-mini");

Parameter | Type | Default | Notes
apiKey | string | (required) | Same as the factory's apiKey.
defaultModel | string | "gpt-4o" | The model used when the agent or call site doesn't override it.

The factory and the constructor produce equivalent clients. Use direct construction when you need to inject your own HttpClient (for retries, proxies, or testing); in that case the caller is responsible for setting the Authorization: Bearer header.
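The injected-HttpClient path can be sketched as follows. This is an assumption, not confirmed API: it presumes OpenAiClient offers an (httpClient, defaultModel) constructor overload analogous to the one documented for OpenAiEmbeddingClient further down — check the actual constructor surface before relying on it.

```csharp
using System.Net.Http;
using LogicGrid.Core.Providers;

// Assumed overload: (httpClient, defaultModel), mirroring
// OpenAiEmbeddingClient. The caller sets the Authorization header.
var http = new HttpClient();
http.DefaultRequestHeaders.Add("Authorization",
    $"Bearer {Environment.GetEnvironmentVariable("OPENAI_API_KEY")}");

var llm = new OpenAiClient(
    httpClient: http,
    defaultModel: "gpt-4o-mini");
```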

Models currently available

LogicGrid doesn't restrict the model name — anything OpenAI exposes through the Chat Completions API will work. Common picks:

Model | Notes
gpt-4o | Strongest general model.
gpt-4o-mini | Cheap, fast, capable — a good default.
gpt-4-turbo | Earlier flagship; still capable.
gpt-3.5-turbo | Cheapest legacy chat model.
o1-mini / o1 | Reasoning models — slower, stronger on hard problems.

For the live list including pinned dated versions (gpt-4o-2024-11-20, etc.), see the OpenAI models page.

Tool calling

OpenAI supports native tool calling. Opt in on agents that need it:

protected override IToolCallingStrategy ToolCallingStrategy
    => new NativeToolCallingStrategy();

Native tool calling is generally more reliable than the prompt-schema strategy on gpt-4o and newer.

Full example — calculator tool over OpenAI

using LogicGrid.Core.Agents;
using LogicGrid.Core.Llm;
using LogicGrid.Core.Tools;
using LogicGrid.Tools.Tools;

var llm = LlmClientBase.OpenAi(
    apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY")!,
    model: "gpt-4o-mini");

IAgent math = new Agent<string>(
    name: "Mathlete",
    description: "Solves arithmetic.",
    systemPrompt: "Use the calculator tool when the user asks for a number.",
    llm: llm,
    tools: new ToolBase[] { new CalculatorTool() },
    toolCallingStrategy: new NativeToolCallingStrategy());

Console.WriteLine(
    await math.RunAsync(
        input: "What is (17 * 23) + 91?",
        ctx: new AgentContext().WithLogging()));
09:14:02.118 [INF] [Mathlete] started
09:14:02.420 [INF] [Mathlete] tool call → calculator { "expression": "(17 * 23) + 91" }
09:14:02.430 [INF] [Mathlete] tool result | 482
09:14:03.180 [INF] [Mathlete] completed | output: (17 * 23) + 91 = 482.

(17 * 23) + 91 = 482.

Background: Tool calling strategy.

Embeddings

Same two-way pattern as the LLM client.

Option 1 — factory method

using LogicGrid.Memory.Embeddings;

var embedder = EmbeddingClientBase.OpenAi(
    apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY")!,
    model: "text-embedding-3-small");

Parameter | Type | Default | Notes
apiKey | string | (required) | OpenAI secret key.
model | string | "text-embedding-3-small" | Any OpenAI embedding model: text-embedding-3-small, text-embedding-3-large, text-embedding-ada-002.
dimensions | int | 1536 | Vector size. Default matches text-embedding-3-small. Use 3072 for text-embedding-3-large.

Option 2 — direct construction

Mirrors OpenAiClient exactly: apiKey first, then defaultModel.

using LogicGrid.Memory.Embeddings;

var embedder = new OpenAiEmbeddingClient(
    apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY")!,
    defaultModel: "text-embedding-3-small",
    dimensions: 1536);

Parameter | Type | Default | Notes
apiKey | string | (required) | OpenAI secret key. Sent as a Bearer header.
defaultModel | string | "text-embedding-3-small" | Embedding model used when the call site doesn't override it.
dimensions | int | 1536 | Vector size. Use 3072 for text-embedding-3-large.
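Once you have vectors, the usual comparison is cosine similarity. A self-contained helper — plain C#, no LogicGrid types involved:

```csharp
using System;

// Cosine similarity between two equal-length vectors:
// dot(a, b) / (|a| * |b|), in [-1, 1] for real vectors.
static double CosineSimilarity(float[] a, float[] b)
{
    double dot = 0, normA = 0, normB = 0;
    for (int i = 0; i < a.Length; i++)
    {
        dot   += a[i] * b[i];
        normA += a[i] * a[i];
        normB += b[i] * b[i];
    }
    return dot / (Math.Sqrt(normA) * Math.Sqrt(normB));
}

Console.WriteLine(CosineSimilarity(
    new float[] { 1, 0, 1 },
    new float[] { 1, 1, 1 }));  // ≈ 0.8165
```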

A second overload accepts a custom HttpClient for retries, proxies, or DI — the caller is responsible for setting the Authorization: Bearer header:

using System.Net.Http;
using LogicGrid.Memory.Embeddings;

var http = new HttpClient();
http.DefaultRequestHeaders.Add("Authorization",
    $"Bearer {Environment.GetEnvironmentVariable("OPENAI_API_KEY")}");

var embedder = new OpenAiEmbeddingClient(
    httpClient: http,
    defaultModel: "text-embedding-3-small",
    dimensions: 1536);

Parameter | Type | Default | Notes
httpClient | HttpClient | (required) | Pre-configured client. Caller sets the Authorization header.
defaultModel | string | "text-embedding-3-small" | Same as above.
dimensions | int | 1536 | Same as above.
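In an ASP.NET Core or generic-host app, the HttpClient overload pairs naturally with IHttpClientFactory. The registration below is a sketch of how you might wire it yourself, not a LogicGrid-provided extension:

```csharp
using System;
using System.Net.Http;
using Microsoft.Extensions.DependencyInjection;
using LogicGrid.Memory.Embeddings;

var services = new ServiceCollection();

// Named client carries the auth header; the factory manages handler lifetimes.
services.AddHttpClient("openai", http =>
    http.DefaultRequestHeaders.Add("Authorization",
        $"Bearer {Environment.GetEnvironmentVariable("OPENAI_API_KEY")}"));

services.AddSingleton(sp => new OpenAiEmbeddingClient(
    httpClient: sp.GetRequiredService<IHttpClientFactory>()
        .CreateClient("openai"),
    defaultModel: "text-embedding-3-small",
    dimensions: 1536));
```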

Cost tracking

OpenAiClient.Pricing returns per-token rates for known models. After each call, LogicGrid reads LlmResponse.Usage and converts it to a CostEstimate. See Tracing for how to read total spend per run. Live rates: openai.com/api/pricing.
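To make the conversion concrete, here is the arithmetic with illustrative rates — the numbers below are examples only, not current pricing; OpenAiClient.Pricing supplies the real per-token figures:

```csharp
using System;

// Illustrative rates only — check openai.com/api/pricing for live numbers.
const decimal inputPerToken  = 0.15m / 1_000_000;  // $0.15 per 1M input tokens
const decimal outputPerToken = 0.60m / 1_000_000;  // $0.60 per 1M output tokens

// Example token counts as they would appear in LlmResponse.Usage.
int promptTokens = 1_200, completionTokens = 350;

decimal cost = promptTokens * inputPerToken
             + completionTokens * outputPerToken;

Console.WriteLine($"${cost:F6}");  // $0.000390
```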

API key handling

Never hardcode. Use environment variables, dotnet user-secrets, or a managed secret store. LogicGrid sends the key to OpenAI as a Bearer Authorization header and does not log it.
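A fail-fast read of the environment variable makes a missing key surface at startup instead of as a 401 mid-run:

```csharp
using System;

// Throw immediately if the key is absent, rather than failing on first call.
var apiKey = Environment.GetEnvironmentVariable("OPENAI_API_KEY")
    ?? throw new InvalidOperationException(
        "OPENAI_API_KEY is not set. Export it or use dotnet user-secrets.");
```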

Troubleshooting

  • 401 Unauthorized — wrong or missing API key.
  • 429 Too Many Requests — rate-limited. Lower the parallelism or upgrade your tier.
  • model not found — either the model name is wrong, or your account doesn't have access to it. Verify on the OpenAI dashboard.
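For 429s, a simple exponential backoff around the call is often enough. The generic sketch below is not a LogicGrid feature; it assumes the transient failure surfaces as an HttpRequestException, so adjust the catch clause to whatever exception type the client actually throws:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

// Retry an async operation with exponential backoff on transient failures.
static async Task<T> WithRetryAsync<T>(
    Func<Task<T>> action, int maxAttempts = 4)
{
    for (int attempt = 1; ; attempt++)
    {
        try { return await action(); }
        catch (HttpRequestException) when (attempt < maxAttempts)
        {
            // Wait 1s, 2s, 4s, ... between attempts.
            await Task.Delay(TimeSpan.FromSeconds(Math.Pow(2, attempt - 1)));
        }
    }
}
```

Usage: wrap the agent call, e.g. `await WithRetryAsync(() => math.RunAsync(input, ctx))`.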