Sequential admin
SequentialAdmin<TInput, TOutput> runs a fixed pipeline of agents.
Agents are executed in the order they appear in the agents list;
each agent's output becomes the next agent's input. The simplest
orchestration pattern — and often the right one.
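Conceptually the admin is just a fold over the agent list. A minimal self-contained sketch (plain delegates stand in for agents; no LogicGrid types are used):

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

public static class SequentialSketch
{
    // Each "agent" is reduced to an async string -> string step.
    public static async Task<string> RunAsync(
        IEnumerable<Func<string, Task<string>>> agents, string input)
    {
        var current = input;
        foreach (var agent in agents)        // fixed order, nothing skipped
            current = await agent(current);  // this output is the next input
        return current;
    }

    public static async Task Main()
    {
        var steps = new List<Func<string, Task<string>>>
        {
            s => Task.FromResult(s.ToUpperInvariant()),  // stand-in "Summary"
            s => Task.FromResult($"score: {s.Length}"),  // stand-in "Review"
        };
        Console.WriteLine(await RunAsync(steps, "jwst")); // prints "score: 4"
    }
}
```

The real `SequentialAdmin` adds LLM calls, logging, and typed output on top, but the data flow is exactly this loop.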
Example
using LogicGrid.Core.Admins;
using LogicGrid.Core.Agents;
using LogicGrid.Core.Llm;
var llm = LlmClientBase.Ollama("llama3.2");
IAgent summary = new Agent<string>(
name: "Summary",
description: "Summarises text.",
systemPrompt: "You are a concise summariser. Answer in 2-3 sentences.",
llm: llm);
IAgent review = new Agent<string>(
name: "Review",
description: "Scores a summary 1-10.",
systemPrompt: "Score the summary 1-10 and explain in one sentence.",
llm: llm);
var admin = new SequentialAdmin<string, string>(
name: "Editorial",
llmClient: llm,
agents: new[] { summary, review });
var ctx = new AgentContext().WithLogging();
var output = await admin.RunAsync(
input: "The James Webb Space Telescope launched in December 2021. " +
"It observes infrared light from the earliest galaxies and " +
"studies exoplanet atmospheres.",
ctx: ctx);
Console.WriteLine($"\n{output}");
09:14:11.080 [INF] [b7d4e210] Run started — admin: Editorial | task: The James Webb Space Telescope launched in December 2021. …
09:14:11.220 [INF] [b7d4e210] [Summary] started | input: The James Webb Space Telescope launched in December 2021. …
09:14:14.402 [INF] [b7d4e210] [Summary] completed | output: The James Webb Space Telescope, launched in 2021, observes infrared light … | 3182ms | 188 tokens
09:14:14.430 [INF] [b7d4e210] [Review] started | input: The James Webb Space Telescope, launched in 2021, observes infrared light …
09:14:16.871 [INF] [b7d4e210] [Review] completed | output: 9/10. Concise and accurate; could mention exoplanet atmospheres explicitly. | 2441ms | 102 tokens
09:14:16.872 [INF] [b7d4e210] Run completed — 2 agents, 2 LLM calls | 5792ms
9/10. Concise and accurate; could mention exoplanet atmospheres explicitly.
Constructor
public SequentialAdmin(
string name,
LlmClientBase llmClient,
IList<IAgent> agents,
AdminOptions? options = null,
IAgentEventBus? eventBus = null)
The eventBus parameter is optional. ctx.WithLogging() and
ctx.WithTracing() create and attach a bus for you; pass an explicit
bus only when you also want to subscribe your own
event handlers.
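The shape of that looks roughly like the following. This is a hypothetical sketch: the concrete `AgentEventBus` type and the `Subscribe` signature are assumptions, not confirmed LogicGrid API — check the `IAgentEventBus` reference before copying it.

```csharp
// Assumed names: AgentEventBus and Subscribe are illustrative only.
var bus = new AgentEventBus();
bus.Subscribe(evt => Console.WriteLine($"event: {evt}"));  // your own handler

var admin = new SequentialAdmin<string, string>(
    name: "Editorial",
    llmClient: llm,
    agents: new[] { summary, review },
    eventBus: bus);  // logging/tracing will attach to this same bus
```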
Use it when
- The pipeline is fixed and linear. The order doesn't depend on the LLM's judgment.
- You want predictable cost — N agents = N LLM calls (plus retries).
- You want easy debugging — every step is visible in the log in the order it executed.
Don't use it when
- The next step depends on the previous step's content → Group chat or Graph.
- You're processing a list of inputs with the same agent → Map-reduce.
- You need actor-critic feedback → Reflexion.
Common pitfalls
- A typed agent mid-chain feeds JSON to the next prompt. Agent
  inputs are always strings; IAgent.RunAsync(string input, …) is the
  only signature. But when an Agent<MyPoco> runs through
  IAgent.RunAsync (which is what SequentialAdmin calls), its typed
  output is serialised back to a JSON string, and that JSON becomes
  the next agent's prompt input. If the next agent's prompt was
  written for natural language, the JSON will throw it off. Either
  keep the whole chain on Agent<string>, or only use a typed agent
  as the last step: SequentialAdmin deserialises the final agent's
  JSON output into TOutput for you, so it never re-enters a prompt.
  Background: Return type & typed output.
- No skip or branch. Every agent in the list runs. If you need
  optional steps or conditional flow, use
  GraphAdmin.
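The safe typed shape, sketched against the API shown above (the ReviewScore POCO and the second agent's prompt are illustrative, not from the LogicGrid docs):

```csharp
// Keep every mid-chain step on Agent<string>; only the final step is typed.
public record ReviewScore(int Score, string Reason);  // illustrative POCO

IAgent summary = new Agent<string>(
    name: "Summary",
    description: "Summarises text.",
    systemPrompt: "You are a concise summariser. Answer in 2-3 sentences.",
    llm: llm);

IAgent score = new Agent<ReviewScore>(  // typed, and deliberately last
    name: "Score",
    description: "Scores a summary.",
    systemPrompt: "Score the summary 1-10 with a one-sentence reason.",
    llm: llm);

// TOutput = ReviewScore: the admin deserialises the final agent's JSON
// for you, so the typed output never re-enters a prompt.
var admin = new SequentialAdmin<string, ReviewScore>(
    name: "Editorial",
    llmClient: llm,
    agents: new[] { summary, score });
```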