Providers

LogicGrid is provider-agnostic. Use the same agent code across Ollama, OpenAI, Anthropic, Gemini, Azure, or AWS Bedrock by changing a single line.

Each provider is accessible via static factory methods on LlmClientBase. You can also instantiate a concrete client directly when you need fine-grained configuration (custom HTTP handler, base URL, headers), or implement LlmClientBase yourself if a provider is not listed here.
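A custom provider can look roughly like the sketch below. This is illustrative only: the `LlmResponse` type and the exact abstract signature of `CallAsync` are assumptions based on names used elsewhere on this page, so check the LlmClientBase API reference for the real abstract members before implementing.

```csharp
using LogicGrid.Core.Llm;

// Hypothetical custom provider. CallAsync, LlmMessage, LlmOptions, and
// LlmResponse follow the identifiers used in these docs; verify the exact
// abstract members against the LlmClientBase reference.
public sealed class MyHttpLlmClient : LlmClientBase
{
    private readonly HttpClient _http = new();

    public override async Task<LlmResponse> CallAsync(
        IReadOnlyList<LlmMessage> messages,
        LlmOptions? options = null,
        CancellationToken cancellationToken = default)
    {
        // Translate messages into your provider's wire format, POST with
        // _http, and map the reply back into an LlmResponse here.
        throw new NotImplementedException();
    }
}
```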

Ollama (local)

Ollama runs LLMs on your own machine.

using LogicGrid.Core.Llm;

var llm = LlmClientBase.Ollama("llama3.2");

Make sure the model is available in your Ollama install — see the Ollama docs for installing and pulling models.
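For example, pulling the model used above with the Ollama CLI:

```shell
# Download the model locally (one-time), then confirm it is installed.
ollama pull llama3.2
ollama list
```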

Custom base URL (Ollama running in Docker, on another host):

var llm = LlmClientBase.Ollama(
    model: "llama3.2",
    baseUrl: "http://192.168.1.50:11434");

See Ollama for the full provider reference.

OpenAI

var llm = LlmClientBase.OpenAi(
    apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY")!,
    model: "gpt-4o-mini");

See OpenAI for the full provider reference.

Anthropic Claude

var llm = LlmClientBase.Anthropic(
    apiKey: Environment.GetEnvironmentVariable("ANTHROPIC_API_KEY")!,
    model: "claude-3-5-haiku-20241022");

See Anthropic for the full provider reference.

Google Gemini

var llm = LlmClientBase.Gemini(
    apiKey: Environment.GetEnvironmentVariable("GEMINI_API_KEY")!,
    model: "gemini-1.5-flash");

See Gemini for the full provider reference.

Azure OpenAI

var llm = LlmClientBase.AzureOpenAi(
    endpoint: "https://my-resource.openai.azure.com",
    apiKey: Environment.GetEnvironmentVariable("AZURE_OPENAI_KEY")!,
    deploymentName: "my-gpt-4o-deployment",
    apiVersion: "2024-10-21",
    underlyingModel: "gpt-4o");

See Azure OpenAI for the full provider reference.

AWS Bedrock

var llm = LlmClientBase.Bedrock(
    accessKeyId: Environment.GetEnvironmentVariable("AWS_ACCESS_KEY_ID")!,
    secretAccessKey: Environment.GetEnvironmentVariable("AWS_SECRET_ACCESS_KEY")!,
    region: "us-east-1",
    modelId: "anthropic.claude-3-5-haiku-20241022-v1:0");

See AWS Bedrock for the full provider reference.

OpenAI-compatible (vLLM, TEI, LM Studio, DeepSeek, Groq)

LogicGrid supports any service compatible with the OpenAI Chat Completions API through the OpenAI-compatible client. Just point it at the right base URL.

// vLLM running locally
var llm = LlmClientBase.Compatible(
    baseUrl: "http://localhost:8000/v1",
    model: "Qwen/Qwen2.5-7B-Instruct");

// LM Studio
var llm = LlmClientBase.Compatible(
    baseUrl: "http://localhost:1234/v1",
    model: "lmstudio-community/Llama-3.2-3B-Instruct-GGUF");

// DeepSeek
var llm = LlmClientBase.Compatible(
    baseUrl: "https://api.deepseek.com/v1",
    model: "deepseek-chat",
    apiKey: Environment.GetEnvironmentVariable("DEEPSEEK_KEY"));

// Groq
var llm = LlmClientBase.Compatible(
    baseUrl: "https://api.groq.com/openai/v1",
    model: "llama-3.3-70b-versatile",
    apiKey: Environment.GetEnvironmentVariable("GROQ_KEY"));

Generation options (LlmOptions)

All providers accept a shared LlmOptions object for generation parameters. Configure these at the agent level via the constructor, or override them per call when using the LLM client directly.

Property      Default                 Notes
Temperature   0.7                     0 = deterministic; 1 = creative.
MaxTokens     2048                    Cap on response tokens. Provider-clamped.
TopP          1.0                     Nucleus sampling.
Stop          null                    List of stop sequences.
Model         (constructor default)   Override the model per call.

Example — a deterministic classifier and a creative writer using the same provider:

IAgent classifier = new Agent<string>(
    name: "Classifier",
    description: "Labels the input.",
    systemPrompt: "Reply with one word: positive, negative, or neutral.",
    llm: llm,
    llmOptions: new LlmOptions { Temperature = 0.0, MaxTokens = 8 });

IAgent writer = new Agent<string>(
    name: "Writer",
    description: "Writes a creative reply.",
    systemPrompt: "Reply in two evocative sentences.",
    llm: llm,
    llmOptions: new LlmOptions { Temperature = 0.9, MaxTokens = 200 });
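Calling the client directly with a per-call override might look like this. This is a sketch only: the `CallAsync` parameter list and the `LlmMessage` construction shown here are assumptions drawn from names used on this page, not a confirmed signature.

```csharp
// Hypothetical per-call override: request a short, deterministic answer
// without changing the client's configured defaults.
var response = await llm.CallAsync(
    new[] { new LlmMessage("user", "Label this review: 'Great battery life.'") },
    new LlmOptions { Temperature = 0.0, MaxTokens = 8 });
```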

Mixing providers in one app

Each agent can use a different provider. Use a fast, cost-effective model for classification and a powerful model for complex, high-stakes tasks.

var fast = LlmClientBase.Ollama("llama3.2");
var strong = LlmClientBase.Anthropic(
    apiKey: Environment.GetEnvironmentVariable("ANTHROPIC_API_KEY")!,
    model: "claude-3-5-sonnet-20241022");

IAgent classifier = new Agent<string>(
    "Classifier", "Cheap, fast labeller.",
    "Reply with one word: positive, negative, or neutral.",
    fast);

IAgent writer = new Agent<string>(
    "Writer", "Strong reply generator.",
    "Write a thoughtful response based on the classifier's verdict.",
    strong);

Streaming

Token-by-token streaming is not yet supported on any provider — LlmClientBase.CallAsync() returns the full response. Streaming is on the roadmap for the next release.

Multimodal

Multimodal input (images, audio, etc.) on providers that support it — Gemini, GPT-4o, Claude — is also planned for a future release. Today Agent<T> and LlmMessage carry text content only.

API keys

Credentials are passed explicitly to constructors or factory methods. Source them from environment variables, dotnet user-secrets, or your preferred secret-management service; LogicGrid never reads credentials implicitly.
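For local development, dotnet user-secrets keeps keys out of source control. For example (the secret name `LogicGrid:AnthropicKey` is just an illustrative choice):

```shell
# One-time setup in the project directory, then store a key.
dotnet user-secrets init
dotnet user-secrets set "LogicGrid:AnthropicKey" "sk-ant-your-key-here"
```

Read the value back at runtime with Microsoft.Extensions.Configuration (AddUserSecrets) and pass it to the factory method in place of the environment-variable lookups shown above.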