Ever wished your AI could calculate something or fetch weather data mid-conversation? That’s exactly what LangChain tools are for!
If you’ve dabbled with LangChain for .NET, you’ve likely marveled at how chains string logic together. But when you need to empower your language model with extra skills—like calling APIs or running a C# method—tools are your secret weapon. This post unpacks how tools work in LangChain for .NET, showing you how to build, connect, and use them in a production-worthy app.
What Are LangChain Tools?
In LangChain, a “Tool” represents a function or external service the agent can use to enhance its reasoning. Think of tools as plugins for your LLM, letting it reach beyond the language model. These tools become essential when you want to build agentic behavior—where an LLM can dynamically decide what function to call and when.
Tools can:
- Perform calculations
- Access databases
- Call APIs (e.g., weather, finance, translation)
- Run internal business logic
They work by pairing a function with metadata describing its name, description, input schema, and return type—all things the LLM needs to use it properly.
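To make that pairing concrete, here is a minimal sketch of what such a base class could look like. The actual LangChain for .NET API may differ in names and details; this is only the function-plus-metadata shape the examples in this post assume.

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// A sketch of the function-plus-metadata pairing. The real LangChain
// for .NET base class may differ; this only illustrates the idea.
public abstract class Tool
{
    public string Name { get; }
    public string Description { get; }
    public Dictionary<string, Type> InputSchema { get; }

    protected Tool(string name, string description, Dictionary<string, Type> inputSchema)
    {
        Name = name;
        Description = description;
        InputSchema = inputSchema;
    }

    // Agents call this with arguments the LLM extracted from the conversation.
    public abstract Task<string> InvokeAsync(Dictionary<string, object> input);
}
```

The name and description are what the LLM actually "sees" when deciding whether to call the tool, which is why the examples below keep them short and specific.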
Creating a Tool Class in C#
LangChain for .NET provides a Tool base class to standardize how tools are exposed. Here’s how you can build a simple calculator tool:
using System.Data;

public class CalculatorTool : Tool
{
    public CalculatorTool() : base(
        name: "calculator",
        description: "Performs basic arithmetic operations (add, subtract, multiply, divide).",
        inputSchema: new() { { "expression", typeof(string) } })
    { }

    public override Task<string> InvokeAsync(Dictionary<string, object> input)
    {
        var expression = input["expression"]?.ToString();

        // DataTable.Compute is a quick way to evaluate "3 + 7"-style expressions.
        var result = new DataTable().Compute(expression, "");
        return Task.FromResult(result.ToString());
    }
}
What’s happening here:
- The constructor provides metadata that the LLM uses to decide when and how to use the tool.
- The inputSchema defines the structure the input should conform to; this is especially important when tools are invoked by agents, not humans.
- The InvokeAsync method takes a dictionary of inputs and performs the action. In this case, we’re using DataTable.Compute as a simple (though limited) way to parse and evaluate math expressions.
Note: In production, you may want to replace DataTable.Compute with a more robust expression evaluator to avoid security risks.
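One lightweight hardening step, shown here as a sketch: whitelist the characters allowed in the expression before handing it to DataTable.Compute. This narrows the attack surface (it blocks identifiers and function calls), but it is not a substitute for a dedicated expression parser.

```csharp
using System;
using System.Data;
using System.Text.RegularExpressions;

// Hedged hardening sketch: only digits, arithmetic operators, parentheses,
// decimal points, and whitespace may reach DataTable.Compute.
public static class SafeCalculator
{
    private static readonly Regex Allowed = new(@"^[0-9+\-*/(). \t]+$");

    public static string Evaluate(string expression)
    {
        if (string.IsNullOrWhiteSpace(expression) || !Allowed.IsMatch(expression))
            throw new ArgumentException("Expression contains unsupported characters.");

        var result = new DataTable().Compute(expression, "");
        return result.ToString();
    }
}
```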
You can now test this tool standalone or pass it to an agent for dynamic use. It’s a clean, encapsulated approach to extending LLM abilities.
Connecting External APIs as LangChain Tools
Want to plug in real-world data? Let’s wire up a weather API using the same pattern:
using Newtonsoft.Json;

public class WeatherTool : Tool
{
    public WeatherTool() : base(
        name: "weather",
        description: "Returns the current temperature for a given city.",
        inputSchema: new() { { "city", typeof(string) } })
    { }

    public override async Task<string> InvokeAsync(Dictionary<string, object> input)
    {
        var city = Uri.EscapeDataString(input["city"].ToString());
        using var client = new HttpClient();
        var json = await client.GetStringAsync($"https://api.weatherapi.com/v1/current.json?key=YOUR_KEY&q={city}");
        dynamic data = JsonConvert.DeserializeObject(json);
        return $"{data.location.name}: {data.current.temp_c} °C";
    }
}
Explanation: This tool wraps a REST API, deserializes JSON using Newtonsoft, and formats the temperature output. It’s now callable by the LLM.
Here’s what you need to pay attention to:
- Use HttpClient as a singleton or via dependency injection to avoid socket exhaustion in real apps.
- Replace YOUR_KEY with your actual API key, and consider securing it using secrets management.
- Add error handling (try-catch) around the HTTP request and JSON parsing to make the tool robust.
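Putting those three points together, a hardened version of the lookup might look like the sketch below. It shares one HttpClient, reads the key from an environment variable (the WEATHER_API_KEY name is an assumption, not part of LangChain for .NET), and returns friendly messages instead of throwing at the agent. System.Text.Json is used here purely to keep the sketch dependency-free.

```csharp
using System;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

// Hardened lookup sketch: shared HttpClient, key from configuration,
// and errors reported as messages rather than unhandled exceptions.
public static class WeatherLookup
{
    private static readonly HttpClient Client = new() { Timeout = TimeSpan.FromSeconds(10) };

    public static async Task<string> GetTemperatureAsync(string city)
    {
        if (string.IsNullOrWhiteSpace(city))
            return "Please provide a city name.";

        // WEATHER_API_KEY is an illustrative name; use your secrets store of choice.
        var key = Environment.GetEnvironmentVariable("WEATHER_API_KEY");
        var url = $"https://api.weatherapi.com/v1/current.json?key={key}&q={Uri.EscapeDataString(city)}";

        try
        {
            var json = await Client.GetStringAsync(url);
            using var doc = JsonDocument.Parse(json);
            var name = doc.RootElement.GetProperty("location").GetProperty("name").GetString();
            var temp = doc.RootElement.GetProperty("current").GetProperty("temp_c").GetDecimal();
            return $"{name}: {temp} °C";
        }
        catch (HttpRequestException ex)
        {
            return $"Weather service unavailable: {ex.Message}";
        }
        catch (JsonException)
        {
            return "Weather service returned an unexpected response.";
        }
    }
}
```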
You now have a tool that pulls live data and can be used in complex, agent-driven workflows.
Using Tools Manually or with Agents
You can use tools in two ways:
Manual Invocation
This is useful during testing or when you want full control over the input/output.
var tool = new CalculatorTool();
var result = await tool.InvokeAsync(new Dictionary<string, object> { { "expression", "3 + 7" } });
Console.WriteLine(result); // 10
You can integrate this with your app’s logic, wrapping it in an API or background task.
Agent Integration
This is the real magic. Tools are passed into an agent that knows how to pick and use them dynamically.
var tools = new List<Tool> { new CalculatorTool(), new WeatherTool() };
var agent = new OpenAIFunctionAgent(llm: myOpenAiModel, tools: tools);
var reply = await agent.GetCompletionAsync("What's the weather in Berlin and 10 * 3?");
Console.WriteLine(reply);
Here’s what happens behind the scenes:
- The agent parses the prompt.
- It identifies the need to call both tools.
- It sequentially invokes WeatherTool and CalculatorTool, passing their results back into the conversation.
Tip: When using multiple tools, make sure each tool has a clear and distinct name and behavior. Avoid overlapping input schemas.
Best Practices When Implementing Tools
- Validate Inputs: Always check that inputs are non-null, expected, and sanitized.
- Handle Exceptions Gracefully: Wrap external calls in try-catch and return user-friendly messages.
- Use Logging: Capture inputs, outputs, and any errors for observability.
- Avoid Blocking Calls: Use async/await for all I/O operations.
- Limit Side Effects: Ensure tools are idempotent or stateless when possible.
FAQ: Common Questions About LangChain Tools in .NET
Can I use dependency injection with my tools?
Yes, but you’ll need to design your tools as services and register them via the DI container.
Can a tool accept multiple input parameters?
Yes! Just define them in the inputSchema dictionary.
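For example, a hypothetical currency-conversion tool could declare three parameters; the parameter names here are purely illustrative.

```csharp
using System;
using System.Collections.Generic;

// Multiple parameters are just extra entries in the schema dictionary.
var inputSchema = new Dictionary<string, Type>
{
    { "amount", typeof(double) },
    { "from", typeof(string) },
    { "to", typeof(string) }
};

Console.WriteLine(inputSchema.Count);
```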
How do I validate tool inputs?
Implement guard clauses or schema validation using libraries like FluentValidation.
Can a tool modify my application’s state?
Yes, but use caution. It’s better to emit commands that your app reacts to, ensuring separation of concerns.
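One way to sketch that command-emitting pattern: the tool serializes a command object instead of touching state directly, and the host application decides whether to act on it. CreateTicketCommand is an invented example type, not anything from LangChain.

```csharp
using System.Text.Json;

// Illustrative command type; your app defines the real command vocabulary.
public record CreateTicketCommand(string Title, string Priority);

public static class CommandEmitter
{
    public static string Emit(string title, string priority)
    {
        var command = new CreateTicketCommand(title, priority);
        // The agent only sees this JSON; the application inspects it,
        // queues it, or rejects it before anything actually changes.
        return JsonSerializer.Serialize(command);
    }
}
```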
Conclusion: Tools Turn LLMs into Smart Collaborators
Tools in LangChain for .NET turn your LLM from a passive text generator into an active participant. Whether it’s fetching data or executing logic, tools give your AI real-world superpowers.
Want your agents to do things, not just talk? Start building tools today and bridge the gap between AI conversation and real execution. And if you’re ready to explore agent memory or multistep workflows, stay tuned for the next article in our series!