Are you wondering how to talk to an LLM from your .NET app without wrestling with brittle HTTP calls or rolling your own prompt‑orchestration layer? You’re not alone – I fought that dragon too until I discovered Semantic Kernel (SK) and life suddenly felt like debugging with IntelliSense again instead of Notepad.
In the next ten minutes you’ll install SK on .NET 8, run your first “Hello AI” prompt, and learn the common pitfalls that bite new adopters – so you can skip the pain and go straight to building cool Gen AI features.
Why Semantic Kernel Matters in the Gen AI Stack
When ChatGPT exploded onto the scene, many teams (mine included) duct‑taped OpenAI calls into their apps. It worked, but the code was fragile, state management was messy, and iterating on prompts felt like copy‑pasting SQL in 2005. SK solves that by giving you:
- A programmable kernel that abstracts prompt, memory, and planning concerns.
- Plugin model to mix and match skills (both native C# methods and prompt templates).
- Pipeline flexibility – swap OpenAI with Azure OpenAI, OSS models, or your own fine‑tunes by flipping configuration, not code.
Think of SK as the Entity Framework for LLMs: you focus on logic, it handles the plumbing.
NuGet vs Source‑Build: Which One & When?
- NuGet – ideal for 99 % of projects. You pull the latest Microsoft.SemanticKernel package and stay productive.
- Source‑build – handy when you need to debug SK internals, submit a PR, or ride the bleeding edge before preview packages hit NuGet. Clone the repo, run `dotnet build -p:Configuration=Release`, and reference the local output.
If you’re evaluating SK, start with NuGet; you can switch later in under five minutes.
Prerequisites
| Requirement | Minimum Version | Notes |
|---|---|---|
| .NET SDK | 8.0.100 | Preview SDKs work but may require `global.json`. |
| OS | Windows 10 21H2 / macOS 12 / Ubuntu 22.04 | Anything that runs the .NET 8 CLI. |
| API Key | OpenAI or Azure OpenAI | We’ll use GPT‑3.5‑Turbo in examples. |
| Networking | HTTPS 443 | Corporate proxies: see troubleshooting. |
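If you have multiple SDKs installed (stable plus previews), a `global.json` at the repo root pins the build to a known SDK. A minimal example – the version shown matches this tutorial, adjust to whatever `dotnet --list-sdks` reports:

```json
{
  "sdk": {
    "version": "8.0.100",
    "rollForward": "latestFeature"
  }
}
```

`rollForward: latestFeature` lets patch and feature-band updates through while still blocking an accidental jump to a different major version.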
Heads‑up: If you’re behind a proxy, set the `HTTP_PROXY` and `HTTPS_PROXY` env vars before running the sample or you’ll meet a friendly `SocketException`.
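On Linux/macOS that looks like the following – the proxy host and port are placeholders, substitute your corporate proxy:

```shell
# Placeholder proxy address – replace with your own
export HTTP_PROXY=http://proxy.corp.example:8080
export HTTPS_PROXY=$HTTP_PROXY

# Sanity check before launching the app
echo "$HTTPS_PROXY"
```

On Windows, use `setx HTTPS_PROXY http://proxy:8080` (new shells) or `$env:HTTPS_PROXY = "..."` in PowerShell for the current session.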
Installing the NuGet Package
Choose the Right Package
- Microsoft.SemanticKernel – meta‑package that lights up all core features. Perfect for quick starts.
- Microsoft.SemanticKernel.Connectors.OpenAI – pulls in only the OpenAI connector. Use when you need a slimmer dependency tree.
- Microsoft.SemanticKernel.Connectors.Qdrant / Memory.AzureCosmosDB / Skills.Excel – pick ’n’ mix à la carte.
For this tutorial we’ll grab the meta‑package:
```bash
# From an empty folder or existing solution
dotnet new console -n HelloSK
cd HelloSK
dotnet add package Microsoft.SemanticKernel --version 1.2.0
```
Preview train arriving! SK ships weekly preview builds tagged `-preview.*`. If you need a fix that merged yesterday, add `--prerelease` to the command above or pin an explicit suffix, e.g. `1.3.0-preview.24308.2`.
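Pinning in the project file looks like this – the version numbers are the ones used in this post, not necessarily the latest available:

```xml
<ItemGroup>
  <!-- Pin an exact stable version... -->
  <PackageReference Include="Microsoft.SemanticKernel" Version="1.2.0" />
  <!-- ...or ride a specific preview build instead:
  <PackageReference Include="Microsoft.SemanticKernel" Version="1.3.0-preview.24308.2" />
  -->
</ItemGroup>
```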
Restore & Verify
```bash
dotnet restore
```
You should see `Microsoft.SemanticKernel` and its friends in `obj/project.assets.json`. If nuget.org is blocked, configure an internal mirror via `NuGet.Config`.
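A minimal `NuGet.Config` pointing at an internal mirror might look like this – the feed URL is a placeholder for whatever your org hosts:

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <!-- Drop nuget.org and use the internal mirror instead -->
    <clear />
    <add key="internal-mirror" value="https://nuget.internal.example/v3/index.json" />
  </packageSources>
</configuration>
```

Place it next to the solution file; NuGet walks up the directory tree and merges configs it finds along the way.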
Your First Kernel in 30 Lines
Let’s write the tiniest usable kernel that says hi.
```csharp
// Program.cs
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Connectors.OpenAI;

var openAiKey = Environment.GetEnvironmentVariable("OPENAI_API_KEY")
    ?? throw new InvalidOperationException("Set OPENAI_API_KEY env var");

Kernel kernel = Kernel.CreateBuilder()
    .AddOpenAIChatCompletion(
        modelId: "gpt-3.5-turbo",
        apiKey: openAiKey)
    .Build();

var result = await kernel.InvokePromptAsync("Write a cheerful greeting from an AI assistant.");
Console.WriteLine(result.GetValue<string>());
```
Run it:
```bash
dotnet run
```
And voilà – GPT replies with something like:
🌞 Hello, human! I’m thrilled to help you build amazing things today. Let’s get started!
What just happened?
- The kernel builder wires dependencies (logging, memory, skills) behind the scenes.
- `AddOpenAIChatCompletion` registers the GPT‑3.5 chat endpoint.
- `InvokePromptAsync` sends the string; SK wraps it in a system template, handles retries, and yields a typed result.
Under the Hood: Prompt Flow
1. System prompt → 2. User prompt → 3. Kernel pipeline → 4. LLM → 5. Post‑processing / memories / skills.
This abstraction lets you bolt on vector storage, function calling, or agentic planning without rewriting your call sites.
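For a taste of what “bolting on function calling” means, here is a sketch of a native plugin the model can invoke automatically. It assumes SK 1.x and an `OPENAI_API_KEY` env var; `WeatherPlugin` and its stubbed data are made up for illustration:

```csharp
using System.ComponentModel;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Connectors.OpenAI;

var builder = Kernel.CreateBuilder()
    .AddOpenAIChatCompletion("gpt-3.5-turbo",
        Environment.GetEnvironmentVariable("OPENAI_API_KEY")!);
builder.Plugins.AddFromType<WeatherPlugin>();
var kernel = builder.Build();

// Let the model decide when to call our native function
var settings = new OpenAIPromptExecutionSettings
{
    ToolCallBehavior = ToolCallBehavior.AutoInvokeKernelFunctions
};
var answer = await kernel.InvokePromptAsync(
    "What's the weather in Oslo?", new KernelArguments(settings));
Console.WriteLine(answer.GetValue<string>());

public sealed class WeatherPlugin
{
    [KernelFunction, Description("Gets the current temperature for a city.")]
    public string GetTemperature(string city) => $"21 °C in {city}"; // stubbed data
}
```

Note that the call site is still `InvokePromptAsync` – the function-calling round trips happen inside the pipeline.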
Troubleshooting Cheat Sheet
| Symptom | Likely Cause | Fix |
|---|---|---|
| `System.IO.FileNotFoundException: Microsoft.SemanticKernel.*.dll` | Mixed stable/preview packages | Align versions or delete `bin`/`obj`. |
| `TypeLoadException` after upgrade | Runtime still on SK v1 schema | Re‑run `dotnet clean` & rebuild. |
| `SocketException: Connection timed out` | Corporate proxy / ZScaler | Export `HTTPS_PROXY=http://proxy:8080` before run. |
| OpenAI `InvalidRequestError: model not found` | Wrong model name or unsubscribed region | Use `gpt-3.5-turbo` or check the Azure model deployment name. |
| `error NETSDK1045: The current .NET SDK does not support .NET 8.0.` | Multiple SDKs installed | Pin the SDK in `global.json`: `{"sdk": {"version": "8.0.100"}}` |
Tip: Run `dotnet --list-sdks` – if you see preview builds older than May 2025, they sometimes hijack the tooling path. Uninstall them or reorder the PATH entries.
FAQ: Semantic Kernel First‑Steps
Do I need an Azure subscription, or can I use OpenAI directly?
No – you can use either. SK ships `AddOpenAIChatCompletion` for the OpenAI API (`modelId`) and `AddAzureOpenAIChatCompletion` for Azure (`endpoint`, `deploymentName`). Swap at runtime via configuration.
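A sketch of driving that choice from configuration – the env-var names, endpoint, and deployment name below are all placeholders:

```csharp
using Microsoft.SemanticKernel;

bool useAzure = Environment.GetEnvironmentVariable("USE_AZURE_OPENAI") == "true";

IKernelBuilder builder = Kernel.CreateBuilder();
if (useAzure)
{
    builder.AddAzureOpenAIChatCompletion(
        deploymentName: "gpt-35-turbo",                    // your Azure deployment name
        endpoint: "https://my-resource.openai.azure.com/", // placeholder endpoint
        apiKey: Environment.GetEnvironmentVariable("AZURE_OPENAI_KEY")!);
}
else
{
    builder.AddOpenAIChatCompletion(
        modelId: "gpt-3.5-turbo",
        apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY")!);
}
Kernel kernel = builder.Build();
```

Everything downstream of `Build()` – prompts, plugins, memories – is identical for both connectors.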
How do I pin a specific SK version for reproducible builds?
Add an explicit `<PackageReference>` with `Version="1.2.0"` in your csproj. In GitHub Actions, cache the packages folder keyed by the hash of your `packages.lock.json` to guarantee reproducible builds.
Is the API stable enough for production?
Yes – starting with v1.0 the API surface is SemVer‑stable. New features land behind optional packages so you don’t break on minor updates.
Can I call SK from Blazor WebAssembly or other client‑side apps?
Absolutely, but remember browsers can’t keep your API key secret. Proxy requests through a secure backend or inject a serverless function for the outbound call.
Conclusion: You’re Talking to an LLM from .NET in Under 10 Minutes
Today you:
- Installed Semantic Kernel via NuGet on .NET 8.
- Built a minimal kernel and sent a prompt.
- Learned the landmines (SDK mismatches, proxies, preview builds) that derail first‑timers.
Go ahead – fork this sample, add a memory store, maybe wire up function calling, and you’ll be three commits away from a conversational copilot feature in your product.
What’s next? In Part 2 we’ll dive into SK’s core abstractions – Functions, Memories, and Planners – and see how they compose into full‑blown AI agents.
Have you tried SK already? Which connector or skill are you most excited about? Let me know in the comments!