Quickstart
- Supported target frameworks: .NET Standard 2.0, .NET 6, or .NET 8.
- An LLM endpoint. Built-in support for Ollama, OpenAI, Anthropic, Gemini,
  Azure OpenAI, AWS Bedrock, and any OpenAI-compatible endpoint (vLLM, TEI,
  LM Studio, DeepSeek, Groq). You can also plug in a custom provider by
  implementing `LlmClientBase`.

LogicGrid is provider-agnostic. Swap one line to switch providers — see Providers.
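The note above mentions implementing `LlmClientBase` for a custom provider, but this quickstart doesn't show that base class's contract. The sketch below is illustrative only: it assumes a single overridable `CompleteAsync` member and a plain-text endpoint, neither of which is confirmed by this page — check the Providers page for the real abstract surface.

```csharp
// Hypothetical sketch — the CompleteAsync signature and the endpoint shape
// are assumptions, not the documented LlmClientBase contract.
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading;
using System.Threading.Tasks;
using LogicGrid.Core.Llm;

public sealed class MyHttpLlmClient : LlmClientBase
{
    private readonly HttpClient _http = new();

    public override async Task<string> CompleteAsync(
        string prompt, CancellationToken ct = default)
    {
        // Forward the prompt to your own inference endpoint
        // and return the completion text.
        var response = await _http.PostAsJsonAsync(
            "http://localhost:5000/v1/complete", new { prompt }, ct);
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync(ct);
    }
}
```

Once implemented, the instance slots in wherever the built-in clients do — e.g. `new Agent<string>(..., llm: new MyHttpLlmClient())`.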
1. Install
```shell
dotnet new console -n MyAgentApp
cd MyAgentApp
dotnet add package LogicGrid.Core
```
`LogicGrid.Core` is the only package you need to get started. It
covers agents, admins, all orchestration patterns, all LLM providers,
events, logging, and tracing. Add more packages when you need
them:
| Package | Purpose |
|---|---|
| LogicGrid.Core | Agents, admins, providers, orchestration |
| LogicGrid.Tools | Built-in tools (calculator, HTTP, web search) |
| LogicGrid.Mcp | Model Context Protocol client |
| LogicGrid.Memory | Vector stores, embeddings, agent memory |
| LogicGrid.Rag | RAG pipeline, hybrid search, document loaders |
2. Your first agent
Replace the contents of `Program.cs`:

```csharp
using LogicGrid.Core.Agents;
using LogicGrid.Core.Llm;

var llm = LlmClientBase.Ollama("llama3.2");

IAgent agent = new Agent<string>(
    name: "Helper",
    description: "Answers questions concisely.",
    systemPrompt: "Answer in one short sentence.",
    llm: llm);

var ctx = new AgentContext();
Console.WriteLine(await agent.RunAsync("Capital of France?", ctx));
```

```text
Paris.
```
That's the whole flow: pick an LLM, build an agent with a name and a
system prompt, and call `RunAsync`.
3. Add observability
LogicGrid ships logging and tracing as fluent extensions on `AgentContext`:
```csharp
using LogicGrid.Core.Agents;
using LogicGrid.Core.Llm;
using LogicGrid.Core.Logging;

var llm = LlmClientBase.Ollama("llama3.2");

IAgent agent = new Agent<string>(
    name: "Helper",
    description: "Answers questions concisely.",
    systemPrompt: "Answer in one short sentence.",
    llm: llm);

var ctx = new AgentContext()
    .WithLogging()
    .WithTracing(out var trace);

var result = await agent.RunAsync("Capital of France?", ctx);

Console.WriteLine($"\nResult : {result}");
Console.WriteLine($"Tokens : {trace.TotalTokenUsage.TotalTokens}");
Console.WriteLine($"Cost   : ${trace.TotalCost.TotalCostUsd:0.0000}");
Console.WriteLine($"Time   : {trace.Duration.TotalMilliseconds:0}ms");
```
```text
09:14:02.118 [INF] [a3f2c891] [Helper] started | input: Capital of France?
09:14:02.120 [DBG] [a3f2c891] [Helper] LLM call started | model: llama3.2
09:14:03.398 [DBG] [a3f2c891] [Helper] LLM call completed | 1278ms | 18 tokens
09:14:03.402 [INF] [a3f2c891] [Helper] completed | output: Paris. | 1284ms | 18 tokens

Result : Paris.
Tokens : 18
Cost   : $0.0000
Time   : 1284ms
```
`WithLogging` provides live console feedback, while `WithTracing` captures a
complete telemetry snapshot for analysis — spans, token usage, cost, retries, and tool calls.
Logger output can be tuned via `LogicGridLoggerOptions` (verbosity, sinks, which fields to show)
— see Logger options.
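This page doesn't show `LogicGridLoggerOptions` in use. As a rough sketch, assuming `WithLogging` accepts an options instance and that property names like the ones below exist (they are illustrative, not confirmed here — see Logger options for the real members):

```csharp
// Illustrative only — MinimumLevel and ShowTokenCounts are assumed
// option names standing in for "verbosity" and "which fields to show".
using LogicGrid.Core.Agents;
using LogicGrid.Core.Logging;

var ctx = new AgentContext()
    .WithLogging(new LogicGridLoggerOptions
    {
        MinimumLevel = LogLevel.Debug, // verbosity
        ShowTokenCounts = true,        // which fields to show
    });
```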
4. Orchestrate with an admin
An admin coordinates several agents. LogicGrid provides several implementations — including
`SequentialAdmin`, `GroupChatAdmin`, `GraphAdmin`, `ParallelAdmin`,
`MapReduceAdmin`, and `ReflexionAdmin` — each exposing a standard `RunAsync` method. The example
below uses `SequentialAdmin`:
```csharp
using LogicGrid.Core.Admins;
using LogicGrid.Core.Agents;
using LogicGrid.Core.Llm;

var llm = LlmClientBase.Ollama("llama3.2");

IAgent summary = new Agent<string>(
    name: "Summary",
    description: "Summarises text in 2-3 sentences.",
    systemPrompt: "You are a concise summariser. Answer in 2-3 sentences.",
    llm: llm);

IAgent review = new Agent<string>(
    name: "Review",
    description: "Scores a summary 1-10.",
    systemPrompt: "Score the summary from 1 to 10 and explain in one sentence.",
    llm: llm);

var admin = new SequentialAdmin<string, string>(
    name: "Editorial",
    llmClient: llm,
    agents: new[] { summary, review });

var ctx = new AgentContext().WithLogging();

// Pass the context so WithLogging takes effect for the run.
var output = await admin.RunAsync(
    input: "The James Webb Space Telescope launched in December 2021. " +
           "It observes infrared light from the earliest galaxies and " +
           "studies exoplanet atmospheres.",
    ctx);

Console.WriteLine($"\n{output}");
```
```text
09:14:11.080 [INF] [b7d4e210] Run started — admin: Editorial | task: The James Webb Space Telescope launched in December 2021. …
09:14:11.220 [INF] [b7d4e210] [Summary] started | input: The James Webb Space Telescope launched in December 2021. …
09:14:14.402 [INF] [b7d4e210] [Summary] completed | output: The James Webb Space Telescope, launched in 2021, observes infrared light … | 3182ms | 188 tokens
09:14:14.430 [INF] [b7d4e210] [Review] started | input: The James Webb Space Telescope, launched in 2021, observes infrared light …
09:14:16.871 [INF] [b7d4e210] [Review] completed | output: 9/10. Concise and accurate; could mention exoplanet atmospheres explicitly. | 2441ms | 102 tokens
09:14:16.872 [INF] [b7d4e210] Run completed — 2 agents, 2 LLM calls | 5792ms

9/10. Concise and accurate; could mention exoplanet atmospheres explicitly.
```
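Because every admin exposes the same `RunAsync`, switching orchestration patterns should mostly mean swapping the admin type. The sketch below assumes `GroupChatAdmin` shares `SequentialAdmin`'s constructor shape, which this page does not confirm — see the Orchestration pages for the real signatures.

```csharp
// Hypothetical — the constructor parameters mirror SequentialAdmin only
// as an assumption; check the Group chat page for the real API.
var chat = new GroupChatAdmin<string, string>(
    name: "Editorial",
    llmClient: llm,
    agents: new[] { summary, review });

var chatOutput = await chat.RunAsync(
    input: "Draft and review a one-paragraph JWST explainer.",
    ctx);
```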
Where next
- Orchestration — Sequential, Group chat, Graph, Parallel, Map-reduce, Reflexion. Each page has a diagram and a runnable example.
- Tools — give agents access to web search, HTTP, math, JSON extraction, custom tools: Tools.
- Memory + RAG — embed documents, retrieve them, ask questions: RAG pipeline.
- Customise — when you need to override prompt rendering, output validation, or retry behaviour: Advanced.
- A different provider — Providers.