# Google Gemini
LogicGrid talks to the Google Generative Language API directly using Gemini's REST endpoint. No Google SDK required.
- API keys (free tier available): aistudio.google.com
- Models: ai.google.dev/gemini-api/docs/models
- Pricing: ai.google.dev/pricing
## Use it

There are two equivalent ways to instantiate the Gemini LLM client.

### Option 1 — static factory (recommended)
```csharp
using LogicGrid.Core.Llm;

var llm = LlmClientBase.Gemini(
    apiKey: Environment.GetEnvironmentVariable("GEMINI_API_KEY")!,
    model: "gemini-1.5-flash");
```
| Parameter | Type | Default | Notes |
|---|---|---|---|
| `apiKey` | `string` | (required) | Google Generative Language API key. Free tier available via AI Studio. |
| `model` | `string` | `"gemini-1.5-flash"` | Any Gemini model id. |
### Option 2 — direct construction
```csharp
using LogicGrid.Core.Providers;

var llm = new GeminiClient(
    apiKey: Environment.GetEnvironmentVariable("GEMINI_API_KEY")!,
    defaultModel: "gemini-1.5-flash");
```
| Parameter | Type | Default | Notes |
|---|---|---|---|
| `apiKey` | `string` | (required) | Same as the factory's `apiKey`. |
| `defaultModel` | `string` | `"gemini-1.5-flash"` | The model used when the agent or call site doesn't override it. |
The factory and the constructor produce equivalent clients. Use direct construction when you need an injected HttpClient (for retries, proxies, or testing). Note that the Gemini REST API authenticates via the `x-goog-api-key` header (or a `key` query parameter), not an `Authorization: Bearer` token; the `apiKey` you pass is what gets used.
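As a sketch of the injection scenario — the `httpClient` parameter name below is an assumption, so check `GeminiClient`'s actual signature before copying:

```csharp
using System;
using System.Net.Http;
using LogicGrid.Core.Providers;

// Hypothetical sketch: assumes GeminiClient accepts an optional HttpClient.
// A custom HttpClient lets you control timeouts, proxies, or retry handlers.
var http = new HttpClient
{
    Timeout = TimeSpan.FromSeconds(30), // fail fast instead of the 100 s default
};

var llm = new GeminiClient(
    apiKey: Environment.GetEnvironmentVariable("GEMINI_API_KEY")!,
    defaultModel: "gemini-1.5-flash",
    httpClient: http);
```

No auth headers need to be set on the injected client; authentication comes from `apiKey`.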
## Models currently available

Common picks at the time of writing — always verify against Google's model page for the live list:

| Model | Notes |
|---|---|
| `gemini-2.0-flash-exp` | Latest experimental. Strong at tool calling. |
| `gemini-1.5-pro` | Strongest 1.5-series model. 1M-token context window. |
| `gemini-1.5-flash` | Fast, cheap, capable. |
| `gemini-1.5-flash-8b` | Smallest, cheapest. |
## Long context
Gemini 1.5 Pro accepts up to 1M tokens — enough for a small codebase or a full book in a single call. Useful for one-shot summarisation where you don't want to chunk and retrieve.
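The one-shot pattern can be sketched with the same factory and the `Agent` API used elsewhere on this page; the agent name, prompt, and file path here are illustrative:

```csharp
using System;
using System.IO;
using LogicGrid.Core.Agents;
using LogicGrid.Core.Llm;

// Illustrative sketch: feed an entire document in a single call instead of
// chunking and retrieving. Uses the 1M-token model.
var llm = LlmClientBase.Gemini(
    apiKey: Environment.GetEnvironmentVariable("GEMINI_API_KEY")!,
    model: "gemini-1.5-pro");

IAgent summariser = new Agent<string>(
    name: "Summariser",
    description: "Summarises long documents.",
    systemPrompt: "Summarise the provided text in ten bullet points.",
    llm: llm);

var book = await File.ReadAllTextAsync("book.txt"); // up to ~1M tokens of input
Console.WriteLine(
    await summariser.RunAsync(input: book, ctx: new AgentContext()));
```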
## Tool calling

Gemini supports native tool calling. Opt in on agents that need it:

```csharp
// Inside your Agent subclass:
protected override IToolCallingStrategy ToolCallingStrategy
    => new NativeToolCallingStrategy();
```

Gemini's tool-call protocol is similar to OpenAI's; LogicGrid translates between the two transparently.
## Full example — calculator tool over Gemini
```csharp
using LogicGrid.Core.Agents;
using LogicGrid.Core.Llm;
using LogicGrid.Core.Tools;
using LogicGrid.Tools.Tools;

var llm = LlmClientBase.Gemini(
    apiKey: Environment.GetEnvironmentVariable("GEMINI_API_KEY")!,
    model: "gemini-1.5-flash");

IAgent math = new Agent<string>(
    name: "Mathlete",
    description: "Solves arithmetic.",
    systemPrompt: "Use the calculator tool when the user asks for a number.",
    llm: llm,
    tools: new ToolBase[] { new CalculatorTool() });

Console.WriteLine(
    await math.RunAsync(
        input: "What is (17 * 23) + 91?",
        ctx: new AgentContext().WithLogging()));
```
Background: Tool calling strategy.
## Cost tracking

`GeminiClient.Pricing` returns per-token rates for known models. Live rates: ai.google.dev/pricing.
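A minimal sketch of looking up a rate — the exact shape of `Pricing` is not documented on this page, so the indexer and property names below are hypothetical:

```csharp
using System;
using LogicGrid.Core.Providers;

// Hypothetical sketch: assumes Pricing is keyed by model id and exposes
// per-million-token input/output rates. Verify against the actual type.
var rates = GeminiClient.Pricing["gemini-1.5-flash"];
Console.WriteLine(
    $"input: ${rates.InputPerMillionTokens}/M tokens, " +
    $"output: ${rates.OutputPerMillionTokens}/M tokens");
```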
## Embedding models

Gemini ships its own embedding models (e.g. `gemini-embedding-001`) with task-type hints like `RETRIEVAL_QUERY` and `RETRIEVAL_DOCUMENT` for asymmetric retrieval. A native `GeminiEmbeddingClient` is planned for a future release. Until then, point `OpenAiCompatibleEmbeddingClient` at Gemini's OpenAI-compatible endpoint to use Gemini embeddings today.
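A sketch of that workaround — the constructor parameter names and the `EmbedAsync` method are assumptions about `OpenAiCompatibleEmbeddingClient`'s surface, though the base URL is Gemini's documented OpenAI-compatible endpoint:

```csharp
using System;
using LogicGrid.Core.Providers;

// Sketch: parameter names (baseUrl, apiKey, model) and EmbedAsync are
// hypothetical — check the actual OpenAiCompatibleEmbeddingClient signature.
var embeddings = new OpenAiCompatibleEmbeddingClient(
    baseUrl: "https://generativelanguage.googleapis.com/v1beta/openai/",
    apiKey: Environment.GetEnvironmentVariable("GEMINI_API_KEY")!,
    model: "gemini-embedding-001");

float[] vector = await embeddings.EmbedAsync("What is the capital of France?");
```

Note the task-type hints (`RETRIEVAL_QUERY` etc.) are a native-API feature and may not be expressible through the OpenAI-compatible route.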
## Troubleshooting

- **API key not valid** — generate a new one in AI Studio.
- **Quota exceeded** — you've hit the free-tier rate limit. Wait a minute or upgrade.
- **SAFETY blocked** — Gemini refused on safety grounds. Try rephrasing or check the safety configuration.