Think ChatGPT just needs a string prompt? Think again — smart prompt design can make or break your AI app.
Modern .NET applications are increasingly integrating AI capabilities via models like OpenAI’s GPT. But slinging raw strings at a language model? That’s amateur hour. You need prompt engineering patterns that are reusable, composable, and testable.
In this post, I’ll show you how to build structured prompt templates, inject variables, and chain prompts dynamically in C#. Let’s engineer prompts like we do software: with patterns and principles.
Why Prompt Engineering Needs Structure
When I started integrating OpenAI into a .NET backend for a content generation tool, I noticed recurring issues:
- Prompt duplication across services.
- Spaghetti-style string concatenation.
- No reuse or testability.
Every prompt was a fragile string, hardcoded inside methods and duplicated in multiple places. If we needed to change the tone of the prompt or add new variables, it was a nightmare. Imagine trying to internationalize your prompts or A/B test different versions — impossible without structure.
The solution? Structured prompt templates and prompt chains. Think of them as the MVC of AI inputs — separation of concerns, encapsulation, and a path to testability.
Building Prompt Templates with Placeholders
Prompt templates are like email templates — you fill in dynamic values. Instead of writing this:
var prompt = $"Write a short story about a {animal} who learns {lesson}";
You use a dedicated structure:
public class PromptTemplate
{
    public string Template { get; }

    public PromptTemplate(string template)
    {
        Template = template;
    }

    public string Render(Dictionary<string, string> variables)
    {
        var result = Template;
        foreach (var pair in variables)
        {
            // The escaped braces evaluate to "{{key}}", matching the placeholder syntax.
            result = result.Replace($"{{{{{pair.Key}}}}}", pair.Value);
        }
        return result;
    }
}
This class lets you maintain your prompts as reusable artifacts. Want to support multiple languages? Just inject a different base string.
Example:
var template = new PromptTemplate("Write a short story about a {{animal}} who learns {{lesson}}.");
var prompt = template.Render(new Dictionary<string, string>
{
["animal"] = "rabbit",
["lesson"] = "kindness"
});
Console.WriteLine(prompt); // "Write a short story about a rabbit who learns kindness."
You can extend this by:
- Throwing exceptions for missing variables.
- Using regex for better matching.
- Supporting nested templates (e.g., partials).
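As a rough sketch of the first two ideas (the StrictPromptTemplate name and regex below are mine, not part of the original class), a stricter renderer can match placeholders with a regex and fail fast when a value is missing:
using System.Text.RegularExpressions;

public class StrictPromptTemplate
{
    // Matches placeholders of the form {{name}}.
    private static readonly Regex Placeholder = new(@"\{\{(\w+)\}\}");

    public string Template { get; }

    public StrictPromptTemplate(string template) => Template = template;

    public string Render(Dictionary<string, string> variables)
    {
        // Replace each match, throwing if the caller forgot a variable.
        return Placeholder.Replace(Template, match =>
        {
            var key = match.Groups[1].Value;
            if (!variables.TryGetValue(key, out var value))
                throw new KeyNotFoundException($"Missing value for placeholder '{key}'.");
            return value;
        });
    }
}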
Composing Prompt Chains
Sometimes one prompt isn’t enough. For example, you might:
- Summarize a paragraph.
- Generate a title for the summary.
- Tag it with keywords.
Instead of a giant prompt that tries to do all of this, you build a prompt chain:
public class PromptChain
{
    private readonly List<Func<string, string>> _steps = new();

    public PromptChain AddStep(Func<string, string> step)
    {
        _steps.Add(step);
        return this;
    }

    public string Execute(string input)
    {
        // Feed the output of each step into the next one.
        return _steps.Aggregate(input, (current, step) => step(current));
    }
}
Usage Example:
var chain = new PromptChain()
    .AddStep(text => new PromptTemplate("Summarize this: {{text}}")
        .Render(new() { ["text"] = text }))
    .AddStep(summary => new PromptTemplate("Create a title for: {{summary}}")
        .Render(new() { ["summary"] = summary }))
    .AddStep(title => $"Final Output: {title}");
var result = chain.Execute("The quick brown fox jumps over the lazy dog.");
Console.WriteLine(result);
Each step transforms the input and passes it forward. This approach enables:
- Better separation of concerns.
- Easier testing of individual steps.
- Plug-and-play composition (replace or re-order steps).
You can even inject API calls, retries, and loggers at each step for full control.
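As a sketch of what that can look like, a small decorator adds retries and a log line around any step. CallModel below is a stand-in for whatever client call you make, not a real SDK method:
// Stand-in for a real model call; swap in your OpenAI client of choice.
string CallModel(string prompt) => $"[model output for: {prompt}]";

// Wraps a step with simple retries and a log line; the attempt count is arbitrary.
Func<string, string> WithRetry(Func<string, string> step, int attempts = 3)
{
    return input =>
    {
        for (var i = 1; ; i++)
        {
            try
            {
                Console.WriteLine($"Attempt {i}");
                return step(input);
            }
            catch (Exception) when (i < attempts)
            {
                // Swallow the error and retry until attempts run out.
            }
        }
    };
}

var chain = new PromptChain()
    .AddStep(WithRetry(text => CallModel($"Summarize this: {text}")))
    .AddStep(WithRetry(summary => CallModel($"Create a title for: {summary}")));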
Making Prompts Testable
Prompt logic should be testable like any business rule. Unit tests protect against regressions.
Basic Unit Test:
[Fact]
public void Prompt_Should_Render_Correctly()
{
    var template = new PromptTemplate("Translate '{{word}}' to French.");
    var result = template.Render(new() { ["word"] = "apple" });
    Assert.Equal("Translate 'apple' to French.", result);
}
Chain Test:
[Fact]
public void PromptChain_Should_Generate_Title()
{
    var chain = new PromptChain()
        .AddStep(input => $"Summarize: {input}")
        .AddStep(summary => $"Title: {summary}");

    var result = chain.Execute("Learn how to code in C#.");
    Assert.Equal("Title: Summarize: Learn how to code in C#.", result);
}
Write these tests early — you’ll be glad when prompts become more complex. Consider snapshot tests for large prompt outputs.
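A snapshot test can be as simple as comparing the rendered prompt against a checked-in file; the file paths below are illustrative:
[Fact]
public void Prompt_Should_Match_Snapshot()
{
    var template = new PromptTemplate(File.ReadAllText("Prompts/story-template.txt"));
    var result = template.Render(new() { ["animal"] = "rabbit", ["lesson"] = "kindness" });

    // The expected file is updated deliberately whenever the prompt changes.
    Assert.Equal(File.ReadAllText("Prompts/story-expected.txt"), result);
}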
FAQ: Common Prompt Engineering Questions
Why not just hardcode prompts as plain strings?
Because you lose reuse, testability, and clarity. Templates eliminate duplication and improve maintainability.
Can I build prompt chains conditionally at runtime?
Absolutely. Wrap your AddStep logic in if statements or use LINQ to build chains dynamically based on runtime input.
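For instance (includeTags is an illustrative runtime flag):
var includeTags = true;

var chain = new PromptChain()
    .AddStep(text => new PromptTemplate("Summarize this: {{text}}")
        .Render(new() { ["text"] = text }));

if (includeTags)
{
    chain.AddStep(summary => new PromptTemplate("Tag the summary with keywords: {{summary}}")
        .Render(new() { ["summary"] = summary }));
}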
How do I localize prompts for different languages?
Use template injection. Store localized versions in resource files or a database and render based on user culture.
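A minimal sketch, with a dictionary standing in for resource files or a database:
var templatesByCulture = new Dictionary<string, PromptTemplate>
{
    ["en"] = new PromptTemplate("Write a short story about a {{animal}} who learns {{lesson}}."),
    ["fr"] = new PromptTemplate("Écris une courte histoire sur un {{animal}} qui apprend {{lesson}}.")
};

var userCulture = "fr"; // e.g. taken from the current request
var prompt = templatesByCulture[userCulture].Render(new()
{
    ["animal"] = "rabbit",
    ["lesson"] = "kindness"
});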
How do I pass a rendered prompt to the OpenAI ChatCompletion API?
Render your final prompt using .Render(...), then pass it into the ChatCompletion request payload. Treat it like building a request body.
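Here's a rough sketch using raw HttpClient against the chat completions endpoint; the official SDKs wrap this same payload, and the model name and environment variable are just examples:
using System.Net.Http.Headers;
using System.Text;
using System.Text.Json;

var prompt = new PromptTemplate("Translate '{{word}}' to French.")
    .Render(new() { ["word"] = "apple" });

using var http = new HttpClient();
http.DefaultRequestHeaders.Authorization =
    new AuthenticationHeaderValue("Bearer", Environment.GetEnvironmentVariable("OPENAI_API_KEY"));

var body = JsonSerializer.Serialize(new
{
    model = "gpt-4o-mini",
    messages = new[] { new { role = "user", content = prompt } }
});

var response = await http.PostAsync(
    "https://api.openai.com/v1/chat/completions",
    new StringContent(body, Encoding.UTF8, "application/json"));

Console.WriteLine(await response.Content.ReadAsStringAsync());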
Can I see what happens at each step of a chain?
Yes. Decorate your chain with logging steps to see each transformation. You can even write a LoggingPromptChain that wraps each function with timing.
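A sketch of that idea as a timing decorator (the names are illustrative):
using System.Diagnostics;

// Wraps a step so each transformation is logged with its elapsed time.
Func<string, string> WithTiming(string name, Func<string, string> step)
{
    return input =>
    {
        var stopwatch = Stopwatch.StartNew();
        var output = step(input);
        Console.WriteLine($"{name}: {stopwatch.ElapsedMilliseconds} ms, {input.Length} chars in, {output.Length} chars out");
        return output;
    };
}

var chain = new PromptChain()
    .AddStep(WithTiming("summarize", text => $"Summarize: {text}"))
    .AddStep(WithTiming("title", summary => $"Title: {summary}"));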
Conclusion: Prompt Engineering is Software Engineering
Prompt engineering isn’t just about clever wording—it’s software design. Use templates to isolate changes. Chain steps for readability. And test your prompts like real code.
Try building your next AI interaction with a bit more structure. Your future self will thank you.
Want to see a real-world repo using these patterns? Let me know in the comments and I’ll share it.