# Admins
An admin orchestrates a group of agents. Where an agent is a single LLM-driven worker, an admin coordinates many of them — one after the other, in parallel, in a debate, on a graph, or in a feedback loop.
## Admins vs. agents
| | Agent | Admin |
|---|---|---|
| Class | Agent<T> / AgentBase<T> | AdminBase<TIn, TOut> (use one of the built-in admins) |
| Holds message history? | No — reset on every RunAsync call | Yes — persists across the entire run |
| Calls the LLM directly? | Yes, every call | Yes, when picking the next agent or aggregating output |
| Owns sub-workers? | No | Yes — its IAgent list |
An admin's MessageHistory lives for the whole run. That's how a
GroupChatAdmin lets agents see what previous agents said. An agent's
history is scoped to a single call — agents never see the bigger
picture unless the admin passes it in.
## Pick the right admin
| Admin | Pattern | Use it when… |
|---|---|---|
| SequentialAdmin | Sequential | The pipeline is a fixed chain — A → B → C. The simplest, most predictable orchestration. |
| GroupChatAdmin | Group chat | The next speaker should be chosen dynamically by an LLM. Best for open-ended collaboration. Stops when the LLM emits DONE, when the agent named by the constructor's finalAgentName is selected, or when MaxLoops is reached. |
| GraphAdmin | Graph | You need explicit branching, conditional edges, or loops. Use this when control flow matters. |
| ParallelAdmin | Parallel | Same input, multiple agents, run concurrently. Optionally aggregated by a final agent. |
| MapReduceAdmin | Map-reduce | A list of inputs to process with the same agent, then reduce. |
| ReflexionAdmin | Reflexion | Actor-critic: one agent produces, another evaluates, the actor retries with feedback. |
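For contrast with the sequential example below, here is a sketch of a group chat. It assumes GroupChatAdmin's constructor mirrors SequentialAdmin's plus the finalAgentName parameter described in the table — check the GroupChatAdmin page for the exact signature:

```csharp
var writer   = new Agent<string>("Writer", "Drafts copy.", "You draft short marketing copy.", llm);
var editor   = new Agent<string>("Editor", "Tightens copy.", "You tighten and correct copy.", llm);
var approver = new Agent<string>("Approver", "Signs off.", "Approve the copy or request changes.", llm);

// The LLM picks the next speaker each round; the run ends when the
// "Approver" agent is selected, DONE is emitted, or MaxLoops is hit.
var desk = new GroupChatAdmin<string, string>(
    name: "CopyDesk",
    llmClient: llm,
    agents: new IAgent[] { writer, editor, approver },
    finalAgentName: "Approver");

var tagline = await desk.RunAsync("Write a tagline for a solar-powered kettle.");
```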
## A sequential example
```csharp
using LogicGrid.Core.Admins;
using LogicGrid.Core.Agents;
using LogicGrid.Core.Llm;

var llm = LlmClientBase.Ollama("llama3.2");

IAgent summary = new Agent<string>(
    "Summary",
    "Summarises text in 2-3 sentences.",
    "You are a concise summariser. Answer in 2-3 sentences.",
    llm);

IAgent review = new Agent<string>(
    "Review",
    "Scores a summary 1-10.",
    "Score the summary 1-10 and explain in one sentence.",
    llm);

var admin = new SequentialAdmin<string, string>(
    name: "Editorial",
    llmClient: llm,
    agents: new[] { summary, review });

var output = await admin.RunAsync(
    "The James Webb Space Telescope launched in December 2021. " +
    "It observes infrared light from the earliest galaxies and " +
    "studies exoplanet atmospheres.");
```

Example output:

```text
9/10. Concise and accurate; could mention exoplanet atmospheres explicitly.
```
The admin runs summary first, feeds its output into review, and
returns the final string.
## AdminOptions reference
All admins accept AdminOptions. The defaults are sensible — leave the
parameter null (the default) and only override what you need.
```csharp
public sealed class AdminOptions
{
    public int MaxLoops { get; set; } = 10;
    public LlmOptions LlmOptions { get; set; } = new LlmOptions();
    public decimal? MaxBudgetUsd { get; set; } = null;
    public float BudgetWarningThreshold { get; set; } = 0.8f;
    public int MaxParallelism { get; set; } = 0;
}
```
| Option | Default | Effect |
|---|---|---|
| MaxLoops | 10 | Hard cap on agent-selection loops (group chat / graph). For group chat with a finalAgentName set, the final agent is force-called once when the cap is reached. See MaxLoops safety net. |
| LlmOptions | new() | Generation options for the admin's own reasoning calls (e.g. picking the next agent). |
| MaxBudgetUsd | null | Stop the run with BudgetExceededException when the accumulated cost crosses this limit. See Cost & budget. |
| BudgetWarningThreshold | 0.8 | Fraction of MaxBudgetUsd at which BudgetWarningEvent fires. |
| MaxParallelism | 0 | Caps how many agents run concurrently during a parallel fan-out. 0 = unlimited. Honoured only by ParallelAdmin and MapReduceAdmin — Sequential / GroupChat / Graph / Reflexion never run agents in parallel and ignore this value. |
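As an illustration, overriding a few of these might look like the following. This assumes the options object is passed to the admin's constructor via an `options` parameter (the parameter name is illustrative) and that `reviewers` is your agent list:

```csharp
var options = new AdminOptions
{
    MaxLoops = 5,            // tighter cap on selection rounds
    MaxBudgetUsd = 0.50m,    // abort once spend passes 50 cents
    BudgetWarningThreshold = 0.9f,
    MaxParallelism = 3       // only honoured by ParallelAdmin / MapReduceAdmin
};

var admin = new ParallelAdmin<string, string>(
    name: "Reviewers",
    llmClient: llm,
    agents: reviewers,
    options: options);       // assumed parameter name
```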
## Cost tracking
Admins accumulate token usage and cost on every LLM call. Trace gives you totals after the run:
```csharp
var ctx = new AgentContext()
    .WithTracing(out var trace);

var output = await admin.RunAsync("…", ctx);

Console.WriteLine($"LLM calls : {trace.TotalLlmCalls}");
Console.WriteLine($"Tokens    : {trace.TotalTokenUsage.TotalTokens}");
Console.WriteLine($"Cost      : ${trace.TotalCost.TotalCostUsd:0.0000}");
```

```text
LLM calls : 4
Tokens    : 896
Cost      : $0.0042
```
If MaxBudgetUsd is set on AdminOptions, the admin stops the moment
spend crosses the limit and the run throws BudgetExceededException.
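A budget-capped run can therefore be wrapped in a try/catch. A minimal sketch, assuming BudgetExceededException is exposed by the namespaces already imported above:

```csharp
try
{
    var output = await admin.RunAsync(longInput);
    Console.WriteLine(output);
}
catch (BudgetExceededException)
{
    // The accumulated cost crossed MaxBudgetUsd mid-run.
    Console.Error.WriteLine("Run aborted: budget exceeded.");
}
```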
## Composing multiple admins
Admins are themselves composable. Because every admin's RunAsync
takes input and returns output, you can chain admin runs in code,
or wrap one admin's RunAsync inside an agent so that another admin
treats it as a sub-step. A complete pipeline often looks like:
- A map-reduce admin processes a batch of raw inputs.
- A parallel admin fans the digest out to several independent reviewers.
- A reflexion admin polishes the final output until a critic approves.
```csharp
using LogicGrid.Core.Admins;
using LogicGrid.Core.Agents;
using LogicGrid.Core.Llm;

var llm = LlmClientBase.Ollama("llama3.2");

// 1) Per-email summariser → daily digest (map-reduce)
IAgent perEmail = new Agent<string>(
    name: "PerEmail",
    description: "Summarises one email.",
    systemPrompt: "Summarise this email in one sentence.",
    llm: llm);

IAgent digest = new Agent<string>(
    name: "Digest",
    description: "Combines summaries.",
    systemPrompt: "Combine the summaries into a 4-bullet digest.",
    llm: llm);

var inbox = new MapReduceAdmin<string>(
    name: "InboxDigest",
    llmClient: llm,
    mapAgent: perEmail,
    reduceAgent: digest);

// 2) Three parallel reviewers + synthesiser (parallel)
IAgent ops     = new Agent<string>("Ops", "Ops impact.", "...", llm);
IAgent product = new Agent<string>("Product", "Product impact.", "...", llm);
IAgent legal   = new Agent<string>("Legal", "Legal impact.", "...", llm);
IAgent merger  = new Agent<string>("Merger", "Merges reviews.", "...", llm);

var review = new ParallelAdmin<string, string>(
    name: "TripleReview",
    llmClient: llm,
    agents: new[] { ops, product, legal },
    aggregator: merger);

// 3) Actor + critic (reflexion)
IAgent actor  = new Agent<string>("Actor", "Drafts the brief.", "...", llm);
IAgent critic = new Agent<string>("Critic", "Approves brief.", "...", llm);

var polish = new ReflexionAdmin<string, string>(
    name: "Polish",
    llmClient: llm,
    actor: actor,
    critic: critic);

// Run the pipeline
var emails = await LoadTodayEmailsAsync();
var summary = await inbox.RunAsync(emails);
var review2 = await review.RunAsync(summary);
var brief = await polish.RunAsync(review2);

Console.WriteLine(brief);
```
The same AgentContext (and event bus) can be passed to each step if
you want a single trace covering the whole pipeline.
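Continuing the pipeline above, that might look like this sketch, which assumes RunAsync accepts the context as a second argument:

```csharp
var ctx = new AgentContext()
    .WithTracing(out var trace);

var digestOut = await inbox.RunAsync(emails, ctx);
var reviewOut = await review.RunAsync(digestOut, ctx);
var briefOut  = await polish.RunAsync(reviewOut, ctx);

// One trace spanning all three admin runs.
Console.WriteLine($"Pipeline cost: ${trace.TotalCost.TotalCostUsd:0.0000}");
```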
## Custom admins
When the six built-ins don't cover the control flow you need,
implement your own admin by deriving from AdminBase<TInput, TOutput>
and overriding Name, LlmClient, Agents, and RunAsync. You get
persistent MessageHistory, cost accumulation, and event firing
without extra plumbing. See
Overriding AdminBase<T>.
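A minimal sketch of a custom admin — the member signatures below are illustrative, so check AdminBase<TInput, TOutput> for the real ones:

```csharp
// Assumes AdminBase exposes abstract Name / LlmClient / Agents properties
// and an abstract RunAsync, as the list above suggests.
public sealed class RoundRobinAdmin : AdminBase<string, string>
{
    private readonly IReadOnlyList<IAgent> _agents;
    private readonly LlmClientBase _llm;

    public RoundRobinAdmin(string name, LlmClientBase llm, IReadOnlyList<IAgent> agents)
    {
        Name = name;
        _llm = llm;
        _agents = agents;
    }

    public override string Name { get; }
    public override LlmClientBase LlmClient => _llm;
    public override IReadOnlyList<IAgent> Agents => _agents;

    public override async Task<string> RunAsync(string input)
    {
        // Visit every agent once, threading each output into the next call.
        var current = input;
        foreach (var agent in _agents)
            current = await agent.RunAsync(current);
        return current;
    }
}
```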