Parallel admin
ParallelAdmin<TInput, TOutput> runs every agent on the same input
concurrently, then optionally aggregates the results with a final
agent.
If you don't pass an aggregator, the admin joins the outputs with newlines.
Example — three parallel perspectives, aggregated
```csharp
using LogicGrid.Core.Admins;
using LogicGrid.Core.Agents;
using LogicGrid.Core.Llm;

var llm = LlmClientBase.Ollama("llama3.2");

IAgent optimist = new Agent<string>(
    name: "Optimist",
    description: "Argues for the upside.",
    systemPrompt: "You are an optimist. Argue why this is a good idea, in 2 bullets.",
    llm: llm);

IAgent pessimist = new Agent<string>(
    name: "Pessimist",
    description: "Argues for the downside.",
    systemPrompt: "You are a pessimist. Argue why this is a bad idea, in 2 bullets.",
    llm: llm);

IAgent realist = new Agent<string>(
    name: "Realist",
    description: "Stays grounded.",
    systemPrompt: "You are a realist. Give 2 bullet-pointed practical concerns.",
    llm: llm);

IAgent synth = new Agent<string>(
    name: "Synthesiser",
    description: "Merges perspectives into a recommendation.",
    systemPrompt: "Read the three perspectives and produce a single recommendation in 3 sentences.",
    llm: llm);

var admin = new ParallelAdmin<string, string>(
    name: "ThreeBrains",
    llmClient: llm,
    agents: new[] { optimist, pessimist, realist },
    aggregator: synth);

var ctx = new AgentContext().WithLogging();

var verdict = await admin.RunAsync(
    input: "Should we migrate our auth service to a managed identity provider?",
    context: ctx);  // pass the logging context so the run emits the log lines below

Console.WriteLine($"\n{verdict}");
```
Sample log output:

```
09:20:01.118 [INF] [c4d1a722] Run started — admin: ThreeBrains | task: Should we migrate our auth service to a managed identity provider?
09:20:01.250 [INF] [c4d1a722] Parallel run started — 3 agents: Optimist, Pessimist, Realist
09:20:04.901 [INF] [c4d1a722] [Optimist] completed | output: • Reduces ongoing security maintenance … | 3651ms | 142 tokens
09:20:05.118 [INF] [c4d1a722] [Pessimist] completed | output: • Vendor lock-in on critical path … | 3868ms | 138 tokens
09:20:05.402 [INF] [c4d1a722] [Realist] completed | output: • Migration takes longer than estimated … | 4152ms | 156 tokens
09:20:05.410 [INF] [c4d1a722] Parallel run completed — 3 agents | 4160ms
09:20:08.901 [INF] [c4d1a722] [Synthesiser] completed | output: Migrate, but in a phased rollout … | 3491ms | 198 tokens
09:20:08.910 [INF] [c4d1a722] Run completed — 4 agents, 4 LLM calls | 7792ms
```
Constructor
```csharp
public ParallelAdmin(
    string name,
    LlmClientBase llmClient,
    IList<IAgent> agents,
    IAgent? aggregator = null,
    AdminOptions? options = null,
    IAgentEventBus? eventBus = null)
```
Typed aggregator output
Like every admin, ParallelAdmin<TInput, TOutput> accepts a typed
TOutput. When TOutput isn't string, the aggregator's response
is parsed as JSON and deserialised into your type.
```csharp
public sealed class Verdict
{
    public string Recommendation { get; set; } = "";
    public IList<string> Risks { get; set; } = new List<string>();
}

IAgent synth = new Agent<Verdict>(
    name: "Synthesiser",
    description: "Merges perspectives into a structured verdict.",
    systemPrompt: "Return JSON: { \"recommendation\": ..., \"risks\": [...] }",
    llm: llm);

var admin = new ParallelAdmin<string, Verdict>(
    name: "ThreeBrains",
    llmClient: llm,
    agents: new[] { optimist, pessimist, realist },
    aggregator: synth);

Verdict v = await admin.RunAsync(
    "Should we migrate our auth service to a managed identity provider?");

Console.WriteLine(v.Recommendation);
```
Use it when
- Each agent contributes an independent perspective on the same input.
- Latency matters — you want the wall-clock time of the slowest agent, not the sum of all agents.
- You'll combine the outputs at the end (aggregator) or just concatenate them (no aggregator).
Don't use it when
- One agent's output should feed the next → Sequential.
- You have a list of inputs to process with the same agent → Map-reduce.
An aggregator is required for typed output. Without one, the
admin returns the raw agent outputs joined with newlines — that
string isn't valid JSON, so JSON deserialisation will fail. If you
need typed output, supply an aggregator whose systemPrompt
instructs the LLM to emit matching JSON.
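To see why, here is a minimal standalone sketch using System.Text.Json (the sample strings are illustrative, not actual admin output): the newline-joined text the admin returns without an aggregator fails JSON parsing outright, while a response shaped like the aggregator prompt above parses cleanly.

```csharp
using System;
using System.Text.Json;

// Without an aggregator, ParallelAdmin returns the raw agent outputs
// joined with newlines — text like this:
var joined = "• Upside: less security maintenance\n• Downside: vendor lock-in";

try
{
    JsonDocument.Parse(joined);
}
catch (JsonException)
{
    // Bullet text is not JSON; typed deserialisation would fail the same way.
    Console.WriteLine("joined output is not valid JSON");
}

// An aggregator prompted to emit matching JSON parses cleanly:
using var doc = JsonDocument.Parse(
    "{ \"recommendation\": \"Migrate, but in a phased rollout\", \"risks\": [] }");
Console.WriteLine(doc.RootElement.GetProperty("recommendation").GetString());
```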
If your aggregator's input grows past the model's context window
(many agents, each producing a large response), you've outgrown
ParallelAdmin — switch to
Map-reduce's hierarchical pattern
to fan-in across multiple layers.
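A rough way to spot this before it bites is a character-count heuristic: roughly 4 characters per token for English text (real tokenisers vary, so treat it as an order-of-magnitude check only). The numbers below, 30 agents at ~2,000 characters each against an 8K-token window, are illustrative assumptions, not library defaults.

```csharp
using System;
using System.Linq;

// Rule of thumb: ~4 characters ≈ 1 token for English text.
static int EstimateTokens(string text) => (text.Length + 3) / 4;

// 30 agents, each returning ~2,000 characters, feeding one aggregator.
var agentOutputs = Enumerable.Repeat(new string('x', 2_000), 30).ToArray();

var aggregatorPromptTokens = 200;   // assumed system-prompt size
var inputTokens = agentOutputs.Sum(EstimateTokens) + aggregatorPromptTokens;

var contextWindow = 8_192;          // e.g. a small local model
var reservedForCompletion = 1_024;  // leave room for the aggregator's answer

Console.WriteLine(inputTokens + reservedForCompletion <= contextWindow
    ? "fan-in fits in one aggregator call"
    : "fan-in exceeds the context window: consider map-reduce");
```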
Bounding the burst with MaxParallelism
By default ParallelAdmin runs every agent in the list at the
same instant via Task.WhenAll. With three agents that's fine; with
thirty against a hosted provider it usually trips the provider's
rate limit. Set AdminOptions.MaxParallelism to cap how many agents
can run concurrently — the rest queue and start as earlier ones
finish:
```csharp
var admin = new ParallelAdmin<string, string>(
    name: "ThreeBrains",
    llmClient: llm,
    agents: new[] { optimist, pessimist, realist },
    options: new AdminOptions { MaxParallelism = 4 });
```
MaxParallelism = 0 (the default) means unlimited.
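Under the hood, a cap like this is typically a counted gate in front of Task.WhenAll. The sketch below is not ParallelAdmin's actual implementation, just the standard .NET SemaphoreSlim pattern such a setting maps to, with a counter to show the burst really is bounded.

```csharp
using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

// Bounded-burst pattern: all tasks are created up front, but a
// SemaphoreSlim gate admits only maxParallelism at a time.
var maxParallelism = 2;
using var gate = new SemaphoreSlim(maxParallelism);

var sync = new object();
var inFlight = 0;
var peak = 0;

var tasks = Enumerable.Range(1, 6).Select(async i =>
{
    await gate.WaitAsync();              // queue until a slot frees up
    try
    {
        lock (sync) { inFlight++; peak = Math.Max(peak, inFlight); }
        await Task.Delay(50);            // stand-in for one agent's LLM call
        lock (sync) { inFlight--; }
        return i;
    }
    finally { gate.Release(); }          // a queued task may now start
});

var results = await Task.WhenAll(tasks);
Console.WriteLine($"{results.Length} tasks done, peak concurrency {peak}");
```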
Common pitfalls
- Token-rate limits. Hosted providers cap how much you can ask for in a sliding 60-second window. The cap usually has two parts: RPM (requests per minute — how many HTTP calls you make) and TPM (tokens per minute — the sum of prompt + completion tokens across those calls). When you cross either threshold, the provider's response carries HTTP status 429 Too Many Requests instead of 200 OK, and the LLM client throws. Three parallel calls is fine; thirty fired in the same instant easily blows past TPM (each agent's prompt + completion tokens add up fast) and may also trigger short-window RPM throttling. A sequential chain of the same thirty agents takes longer wall-clock but spreads the load across the 60-second window, so it rarely trips. The fix is to set AdminOptions.MaxParallelism to a value your provider tier can absorb (4–8 is a common starting point) — see the section above.
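Capping parallelism is the first-line fix; if your tier is still tight, you can additionally wrap calls in a retry with exponential backoff so an occasional 429 degrades to a pause rather than a failed run. This is a generic sketch, not a LogicGrid API: HttpRequestException stands in for whatever your client actually throws on 429, and the delay hook exists only so the demo can skip real waits.

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

// Generic retry helper: wait 2^attempt seconds (1s, 2s, 4s, ...) between
// attempts; rethrow once maxAttempts is exhausted.
static async Task<T> WithRetryAsync<T>(
    Func<Task<T>> call, int maxAttempts = 5, Func<TimeSpan, Task>? delay = null)
{
    delay ??= ts => Task.Delay(ts);
    for (var attempt = 0; ; attempt++)
    {
        try { return await call(); }
        catch (HttpRequestException) when (attempt + 1 < maxAttempts)
        {
            await delay(TimeSpan.FromSeconds(Math.Pow(2, attempt)));
        }
    }
}

// Simulated LLM call: rate-limited twice, then succeeds. HttpRequestException
// stands in for whatever your client actually throws on HTTP 429.
var attempts = 0;
Task<string> FlakyCall()
{
    attempts++;
    if (attempts < 3) throw new HttpRequestException("429 Too Many Requests");
    return Task.FromResult("Migrate, but in a phased rollout");
}

var result = await WithRetryAsync(FlakyCall, delay: _ => Task.CompletedTask);
Console.WriteLine($"{result} (after {attempts} attempts)");
```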