Tool calling strategy

How a tool call is sent to the LLM and parsed out of its response is configurable via IToolCallingStrategy. Two implementations ship in the box: PromptSchemaStrategy (the default) and NativeToolCallingStrategy.

The two strategies

PromptSchemaStrategy (default)

Embeds tool schemas in the system prompt and parses tool calls out of the LLM's text response. Works with every provider, including local models that don't speak any native tool-calling protocol.

```text
SYSTEM:
You have these tools:
- name: calculator
  description: …
  parameters: { type: "object", properties: { expression: … } }

Reply with EITHER a tool call (JSON) or a final answer.
Tool call format:
{"tool": "<name>", "args": { … }}
```

The LLM either replies with the JSON tool-call shape or with a normal answer. The framework parses the response, runs the tool if present, and feeds the result back as a regular message.
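The parsing half of this can be sketched in a few lines. `ToolCallParser` below is a hypothetical helper, not part of the framework; it shows the kind of extraction a prompt-schema parser has to do, including tolerating prose around the JSON:

```csharp
using System;
using System.Text.Json;

// Hypothetical sketch (not LogicGrid's actual parser): decide whether an
// LLM reply is a tool call in the {"tool": …, "args": …} shape or a
// plain final answer.
public static class ToolCallParser
{
    public sealed record ParsedToolCall(string Tool, JsonElement Args);

    public static ParsedToolCall? TryParse(string llmReply)
    {
        // Models often wrap the JSON in prose or code fences, so grab the
        // span from the first '{' to the last '}'.
        int start = llmReply.IndexOf('{');
        int end = llmReply.LastIndexOf('}');
        if (start < 0 || end <= start)
            return null; // no JSON at all: treat as a final answer

        try
        {
            using var doc = JsonDocument.Parse(llmReply[start..(end + 1)]);
            var root = doc.RootElement;
            if (root.TryGetProperty("tool", out var tool) &&
                root.TryGetProperty("args", out var args) &&
                tool.ValueKind == JsonValueKind.String)
            {
                // Clone so the args outlive the disposed JsonDocument.
                return new ParsedToolCall(tool.GetString()!, args.Clone());
            }
        }
        catch (JsonException)
        {
            // Malformed JSON: fall through and treat as a normal answer.
        }
        return null;
    }
}
```

A reply like `Sure: {"tool": "calculator", "args": {"expression": "2+2"}}` parses as a call to `calculator`; a reply with no JSON object parses as `null`, i.e. a final answer.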

NativeToolCallingStrategy

Uses the provider's structured tool-calling API (tool_calls on OpenAI, tool_use on Anthropic, function calls on Gemini). More reliable on capable models because the provider enforces the JSON shape.

Use it on OpenAI, Anthropic, Gemini, or any OpenAI-compatible endpoint that exposes the tools and tool_calls fields. Don't use it on Ollama unless the specific model supports the protocol.
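For reference, the wire shape the native strategy targets on an OpenAI-compatible endpoint looks roughly like this (field names follow the OpenAI Chat Completions API; Anthropic and Gemini use different but analogous shapes):

```json
{
  "model": "gpt-4o",
  "messages": [{ "role": "user", "content": "What is 2+2?" }],
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "calculator",
        "description": "Evaluates an arithmetic expression.",
        "parameters": {
          "type": "object",
          "properties": { "expression": { "type": "string" } },
          "required": ["expression"]
        }
      }
    }
  ]
}
```

The response then carries a structured `tool_calls` array (function name plus JSON-encoded arguments) instead of free text, which is what makes the native strategy more reliable: the provider, not the prompt, enforces the shape.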

Picking one

| Use PromptSchemaStrategy when… | Use NativeToolCallingStrategy when… |
| --- | --- |
| Targeting Ollama, llama.cpp, or older local models. | Targeting OpenAI, Anthropic, or Gemini. |
| You want the same agent code to work across all providers. | You want the model's first-class tool support. |
| You're debugging tool calls: the prompt and parser are easy to inspect. | Reliability beats provider portability. |
| You're using a smaller / quantised local model that struggles with the native protocol. | You're using a flagship hosted model that handles tools well. |

How to set it

Pass it to Agent<T>:

```csharp
using LogicGrid.Core.ToolCalling;

IAgent agent = new Agent<string>(
    "Assistant",
    "Uses native tool calling.",
    "Use the available tools.",
    llm,
    tools: new ToolBase[] { new CalculatorTool() },
    toolCallingStrategy: new NativeToolCallingStrategy());
```

Or override on AgentBase<T>:

```csharp
public sealed class MyAgent : AgentBase<string>
{
    protected override IToolCallingStrategy ToolCallingStrategy
        => new NativeToolCallingStrategy();

    /* … */
}
```

When tool calls misbehave

Symptom → likely cause:

  • LLM ignores the tools and answers from training data. System prompt doesn't tell it when to call. Add a sentence like "Use the calculator tool whenever the answer is a number." On native strategy, the issue can also be the tool's Description being vague — the LLM uses descriptions to decide when to call.
  • LLM calls the tool with wrong field names. On PromptSchemaStrategy, the schema may not match what the LLM inferred from the prompt. Tighten the [Description] attributes or switch to native.
  • LLM never returns a final answer. It's stuck in a tool-call loop. Set MaxLoops on the agent or add a stop condition in the prompt ("After at most 2 tool calls, give the final answer.").
  • Local model emits malformed JSON for tool calls. The PromptSchemaStrategy parser is lenient but not magical. Try a larger model, lower temperature, or fewer tools.
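The loop-guard idea behind MaxLoops can be sketched as plain control flow. Every name here (`maxLoops`, `CompleteAsync`, `InvokeAsync`, the message types) is illustrative, not LogicGrid's actual run loop; the point is only that a hard iteration cap guarantees termination:

```csharp
// Illustrative sketch of a capped tool-call loop, assuming hypothetical
// llm / strategy / tools objects. Not the framework's real run loop.
for (int i = 0; i < maxLoops; i++)
{
    var response = await llm.CompleteAsync(messages);
    var call = strategy.ParseToolCall(response);
    if (call is null)
        return response.Text; // no tool call: this is the final answer

    var result = await tools[call.ToolName].InvokeAsync(call.Args);
    messages.Add(strategy.BuildToolResultMessage(call.ToolName, call.Id, result));
}

// Cap hit: force a final answer instead of looping forever.
messages.Add(new LlmMessage(Role.User, "Stop calling tools and give the final answer."));
return (await llm.CompleteAsync(messages)).Text;
```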

Implementing your own strategy

Implement IToolCallingStrategy:

```csharp
public interface IToolCallingStrategy
{
    void PrepareRequest(
        IList<LlmMessage> messages,
        IList<ToolSchema> tools,
        LlmOptions options);

    ToolCallResult? ParseToolCall(LlmResponse response);

    LlmMessage BuildToolResultMessage(
        string toolName,
        string toolCallId,
        string result);
}
```

You'd build your own when:

  • A provider has a tool-calling protocol the built-ins don't cover.
  • You want to wrap calls in a security layer (sign args, check ACLs) before they're sent.
  • You want to log/redact tool args for compliance.
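As an example of the compliance case, a strategy can decorate an existing one rather than reimplement parsing. This sketch assumes the interface above; the `ToolName` property, log sink, and redaction rule are placeholders for illustration:

```csharp
// Sketch: wraps any IToolCallingStrategy and logs tool calls with the
// arguments redacted. The log sink and the ToolName property on
// ToolCallResult are assumptions, not confirmed framework APIs.
public sealed class RedactingStrategy : IToolCallingStrategy
{
    private readonly IToolCallingStrategy _inner;
    private readonly Action<string> _log;

    public RedactingStrategy(IToolCallingStrategy inner, Action<string> log)
    {
        _inner = inner;
        _log = log;
    }

    public void PrepareRequest(
        IList<LlmMessage> messages, IList<ToolSchema> tools, LlmOptions options)
        => _inner.PrepareRequest(messages, tools, options);

    public ToolCallResult? ParseToolCall(LlmResponse response)
    {
        var call = _inner.ParseToolCall(response);
        if (call is not null)
            _log($"tool={call.ToolName} args=<redacted>"); // never log raw args
        return call;
    }

    public LlmMessage BuildToolResultMessage(
        string toolName, string toolCallId, string result)
        => _inner.BuildToolResultMessage(toolName, toolCallId, result);
}
```

Because it decorates rather than replaces, the same redaction applies whether the inner strategy is prompt-schema or native.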

For most apps the two built-in strategies cover everything.