Are you sure your Blazor app is as fast as you think? In one of my projects, a harmless-looking <Select>
with 5k options quietly ate 40% of render time and triggered hundreds of unnecessary diffs on every keystroke. The fix took 10 minutes. In this guide I’ll show you practical, copy‑paste‑ready techniques I use to make Blazor apps feel instant – from first paint to snappy interactions – without rewriting your codebase.
Why it matters
- User retention & satisfaction: Faster first paint and input response keep people from bouncing; sluggish forms and lists kill adoption.
- Cost & scalability: In Blazor Server, every unnecessary rerender and event eats per-circuit CPU/RAM. Optimized diffs = more concurrent users on the same hardware.
- Startup time on real devices: Smaller payloads (IL/assemblies, assets) dramatically improve cold start on mobile/low-bandwidth networks.
- WASM efficiency: Less IL to download/JIT, targeted AOT only where it pays, and smarter caching mean smoother interactions after load.
- Core Web Vitals & SEO: SSR + streaming improves LCP/INP and delivers shareable, indexable HTML, helping marketing pages and app shells alike.
- Stability & maintainability: Clean JS interop (cache/dispose) prevents leaks and circuit drops. Throttled inputs and virtualization stop render storms.
- Accessibility & inclusivity: Reducing UI work benefits screen readers and low-end hardware just as much as high-end desktops.
Goal: ship fewer bytes, do less work per render, and keep TTI low across Server, WASM, and SSR.
Choose the right hosting/interaction model
Blazor now supports multiple ways to run UI logic. The model influences latency, memory, and payload size.
- Blazor Server
- Pros: tiny download, instant first paint, works on old devices; keeps .NET on server.
- Cons: depends on SignalR round-trips; per-user server memory; intermittent networks hurt UX.
- Best for: internal apps, low-latency networks, heavy data access.
- Blazor WebAssembly (WASM)
- Pros: runs fully client-side; offline; scales cheaply (static hosting/CDN).
- Cons: initial payload size; CPU-bound on weak devices.
- Best for: public apps, offline/edge scenarios.
- Blazor SSR + Interactive (aka the .NET 8 “unified” story)
- Server-side prerenders HTML for fast first paint, then selectively enables interactivity via: InteractiveServer (SignalR circuit), InteractiveWebAssembly (client-side), or InteractiveAuto (Server first, then WebAssembly once the runtime is cached).
- You can mix per component! Great for progressive enhancement.
Rule of thumb:
- If your users are mostly on corporate networks: start with Server or SSR + InteractiveServer.
- If global audience + CDN: WASM or SSR + InteractiveWebAssembly with AOT for hotspots.
Example – pick per component
@* Renders fast via SSR; turns interactive on the client if possible *@
<Dashboard @rendermode="InteractiveAuto" />

@* Or, inside Dashboard.razor itself, use a file-level directive instead: *@
@* @rendermode InteractiveAuto *@
Measure first: easy instrumentation that pays off
Before optimizing, add cheap telemetry. Two helpers I drop into new apps:
Render stopwatch for components
public sealed class RenderTimer : IDisposable
{
private readonly string _name;
private readonly Stopwatch _sw = Stopwatch.StartNew();
public RenderTimer(string name) => _name = name;
public void Dispose()
{
_sw.Stop();
Console.WriteLine($"[Render] {_name} took {_sw.ElapsedMilliseconds} ms");
}
}
Use in a component – start the timer when parameters arrive, stop it once the render completes:
@implements IDisposable
@code {
    private RenderTimer? _rt;
    protected override void OnParametersSet()
        => _rt = new RenderTimer(GetType().Name);
    protected override void OnAfterRender(bool firstRender)
    {
        _rt?.Dispose(); // stops the stopwatch and logs this render
        _rt = null;
    }
    public void Dispose() => _rt?.Dispose();
}
Now you’ll see real numbers for expensive components.
Quick network timing
Wrap HttpClient to log URL, bytes, and duration:
public sealed class HttpLoggingHandler : DelegatingHandler
{
    // Leave InnerHandler unset; IHttpClientFactory wires it up when the
    // handler is registered via AddHttpMessageHandler.
    protected override async Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken ct)
    {
        var sw = Stopwatch.StartNew();
        var response = await base.SendAsync(request, ct);
        sw.Stop();
        Console.WriteLine($"HTTP {request.Method} {request.RequestUri} -> {(int)response.StatusCode} in {sw.ElapsedMilliseconds} ms; bytes: {response.Content.Headers.ContentLength}");
        return response;
    }
}
Register once and you’ll catch slow endpoints and bloated payloads.
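A minimal registration sketch, assuming the handler has a parameterless constructor and you use IHttpClientFactory (the client name "api" and base address are placeholders):

```csharp
// Program.cs – register the handler once; every client resolved from this
// named registration then logs timing and payload size automatically.
builder.Services.AddTransient<HttpLoggingHandler>();
builder.Services.AddHttpClient("api", c => c.BaseAddress = new Uri("https://example.com/"))
                .AddHttpMessageHandler<HttpLoggingHandler>();
```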
Understand the rendering pipeline (and outsmart it)
Rendering is diff-based. Avoid making the diff engine work when nothing changed.
Use ShouldRender strategically
@code {
    private bool _dirty;
    protected override bool ShouldRender()
    {
        if (!_dirty) return false; // skip the diff when nothing changed
        _dirty = false;            // reset so later no-op events don't rerender
        return true;
    }
    void OnFilterChanged(string value) { _dirty = true; StateHasChanged(); }
}
Always @key repeaters and dynamic fragments
@key helps Blazor reuse elements instead of tearing down and recreating.
@foreach (var item in Items)
{
<Row @key="item.Id" Model="item" />
}
Avoid heavy work during render
Never query DB/HTTP or run heavy LINQ in BuildRenderTree or in computed properties used in markup. Materialize data before render.
@code {
private IReadOnlyList<Customer> _view = Array.Empty<Customer>();
protected override async Task OnParametersSetAsync()
{
var data = await _client.GetCustomersAsync();
_view = ApplyFilters(data); // precompute
}
}
Throttle chatty events (input)
<input value="@Search" @oninput="e => DebouncedInputAsync(e.Value?.ToString())" />
@code {
    private readonly TimeSpan _debounce = TimeSpan.FromMilliseconds(200);
    private CancellationTokenSource? _cts;
    private async Task DebouncedInputAsync(string? v)
    {
        _cts?.Cancel();
        var cts = _cts = new CancellationTokenSource();
        try
        {
            await Task.Delay(_debounce, cts.Token); // wait for typing to pause
            Search = v;
            StateHasChanged();
        }
        catch (TaskCanceledException) { /* superseded by a newer keystroke */ }
    }
}
Virtualize everything that lists
Rendering thousands of DOM nodes is slow. Use built-in virtualization.
<Virtualize Items="Customers" ItemSize="38" OverscanCount="3" Context="c">
<Row Model="c" />
</Virtualize>
Tips:
- Provide ItemSize (px) for accurate calculations.
- Keep row components pure (no network calls inside each row).
- For large tables, precompute narrow view models with only visible columns.
If you need grouping/pinning, virtualize at the outermost scroller only to avoid nested reflows.
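For remote data, Virtualize can also pull pages on demand through its ItemsProvider parameter, so only the visible window is ever fetched. A sketch – GetCustomersPageAsync and its page shape are hypothetical:

```razor
<Virtualize ItemsProvider="LoadCustomers" ItemSize="38" Context="c">
    <ItemContent><Row Model="c" /></ItemContent>
    <Placeholder><div class="skeleton-row"></div></Placeholder>
</Virtualize>

@code {
    // Fetches only the requested window; TotalItemCount sizes the scrollbar correctly.
    private async ValueTask<ItemsProviderResult<Customer>> LoadCustomers(ItemsProviderRequest request)
    {
        var page = await _client.GetCustomersPageAsync(
            request.StartIndex, request.Count, request.CancellationToken);
        return new ItemsProviderResult<Customer>(page.Items, page.TotalCount);
    }
}
```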
JS interop without tears (and leaks)
Interop is powerful, but repeated module loads and object pins can be costly.
Cache JS modules
@implements IAsyncDisposable
@code {
    [Inject] IJSRuntime JS { get; set; } = default!;
    private IJSObjectReference? _mod;
    protected override async Task OnAfterRenderAsync(bool firstRender)
    {
        if (firstRender)
            _mod = await JS.InvokeAsync<IJSObjectReference>("import", "./js/grid.js");
    }
    public async ValueTask DisposeAsync()
    {
        if (_mod is not null) await _mod.DisposeAsync(); // important!
    }
}
Avoid excessive JSON serialization
- Pass primitive parameters to JS where possible.
- For large data, consider passing an ID and retrieving details from a shared JS cache.
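A sketch of the ID-based pattern, assuming your JS module exposes hypothetical cacheSet and renderChart functions that share a module-level Map:

```csharp
// Transfer the big dataset once under an ID, then reference it from
// later interop calls instead of re-serializing it every time.
var id = Guid.NewGuid().ToString("N");
await _mod!.InvokeVoidAsync("cacheSet", id, bigDataset);  // one-time transfer
await _mod.InvokeVoidAsync("renderChart", "#chart", id);  // later calls pass only the ID
```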
Don’t leak DotNetObjectReference
If JS keeps the reference for callbacks, hold it in a field and dispose it with the component – not immediately after the call:
private DotNetObjectReference<MyGrid>? _handle; // MyGrid = this component's type
protected override async Task OnAfterRenderAsync(bool firstRender)
{
    if (firstRender)
    {
        _handle = DotNetObjectReference.Create(this);
        await _mod!.InvokeVoidAsync("register", _handle);
    }
}
public void Dispose() => _handle?.Dispose(); // otherwise the component is pinned and leaks
When to use unmarshalled calls
If you must micro-optimize hot loops in WASM, IJSUnmarshalledRuntime can avoid JSON overhead, but it’s niche and marked obsolete as of .NET 7 – prefer the [JSImport]/[JSExport] source-generated interop, and keep either behind an interface.
Data fetching that respects the UI
Cancel stale requests
Users type fast; don’t update UI with old responses.
private CancellationTokenSource _loadCts = new();
async Task LoadAsync()
{
    _loadCts.Cancel();
    _loadCts.Dispose();
    var cts = _loadCts = new CancellationTokenSource();
    try
    {
        var data = await _client.GetAsync(cts.Token);
        _items = data;
        StateHasChanged();
    }
    catch (OperationCanceledException) { /* a newer request superseded this one */ }
}
Stream results into the page
With IAsyncEnumerable<T> you can render progressively (pairs nicely with SSR streaming).
await foreach (var chunk in _service.SearchAsync(query))
{
_buffer.AddRange(chunk);
StateHasChanged();
}
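The producing side can be a plain iterator. A sketch, assuming a hypothetical _store.QueryPageAsync that returns one page of results (requires using System.Runtime.CompilerServices for the attribute):

```csharp
// Yield results page-by-page so the UI paints each chunk as it arrives
// instead of waiting for the full result set.
public async IAsyncEnumerable<IReadOnlyList<Result>> SearchAsync(
    string query,
    [EnumeratorCancellation] CancellationToken ct = default)
{
    for (var page = 0; ; page++)
    {
        var chunk = await _store.QueryPageAsync(query, page, pageSize: 50, ct);
        if (chunk.Count == 0) yield break; // no more data
        yield return chunk;
    }
}
```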
Cache aggressively at the edge
- Enable HTTP caching headers for static assets and lookup lists.
- Memoize expensive computed projections in memory (size-bound).
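A size-bound memoization sketch using IMemoryCache (the "countries" key, sizes, and _client call are illustrative – sizes are arbitrary units you assign against SizeLimit):

```csharp
// Program.cs: cap the cache so memoized lookups can't grow unbounded.
builder.Services.AddMemoryCache(o => o.SizeLimit = 1_000);

// In a service that received IMemoryCache _cache via DI:
public Task<IReadOnlyList<Country>> GetCountriesAsync() =>
    _cache.GetOrCreateAsync("countries", async entry =>
    {
        entry.SetSize(1);                                   // counts against SizeLimit
        entry.SetAbsoluteExpiration(TimeSpan.FromHours(6)); // refresh occasionally
        return await _client.GetCountriesAsync();
    })!;
```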
Cut payload size (the cheapest perf win)
Trim and link
For WASM projects, enable trimming and relinking for small payloads.
<!-- in .csproj -->
<PropertyGroup>
<PublishTrimmed>true</PublishTrimmed>
<InvariantGlobalization>true</InvariantGlobalization>
<BlazorEnableTimeZoneSupport>false</BlazorEnableTimeZoneSupport>
</PropertyGroup>
Lazy-load assemblies
Split feature areas into lazy assemblies so first load stays tiny.
<ItemGroup>
<BlazorWebAssemblyLazyLoad Include="Awesome.Feature.dll" />
</ItemGroup>
Load when needed:
await assemblyLoader.LoadAssembliesAsync(new[] { "Awesome.Feature.dll" });
Note: from .NET 8 the published assembly files use the .wasm extension, so the name becomes "Awesome.Feature.wasm" (match it in the .csproj entry too).
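The usual trigger point is the Router's OnNavigateAsync callback, so the assembly loads only when the user enters that feature area. A sketch – the "awesome" route prefix and assembly name are placeholders:

```razor
@inject LazyAssemblyLoader AssemblyLoader
@using System.Reflection
@using Microsoft.AspNetCore.Components.Routing
@using Microsoft.AspNetCore.Components.WebAssembly.Services

<Router AppAssembly="typeof(App).Assembly"
        AdditionalAssemblies="_lazyLoaded"
        OnNavigateAsync="OnNavigateAsync">
    <Found Context="routeData">
        <RouteView RouteData="routeData" />
    </Found>
</Router>

@code {
    private readonly List<Assembly> _lazyLoaded = new();

    private async Task OnNavigateAsync(NavigationContext context)
    {
        if (context.Path.StartsWith("awesome")) // hypothetical route prefix
        {
            // Load once; Router then discovers the routes inside the new assembly.
            var loaded = await AssemblyLoader.LoadAssembliesAsync(new[] { "Awesome.Feature.dll" });
            _lazyLoaded.AddRange(loaded);
        }
    }
}
```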
Static compression + CDN
- Serve Brotli (and Gzip fallback).
- Put _framework and images behind a CDN with long cache TTL and content hashing.
AOT (selectively)
- AOT dramatically speeds hot compute paths in WASM but increases size.
- Consider partial AOT (compile critical assemblies AOT, others IL) when build times/size matter.
SSR & interactive rendering tricks (.NET 8)
Stream the shell, hydrate later
- Use SSR to send meaningful HTML quickly.
- Then enable interactivity per component via @rendermode.
<Hero />  @* stays static SSR – no interactivity needed *@
<Filters @rendermode="InteractiveServer" />
<Results @rendermode="InteractiveWebAssembly" />
(Mixing modes like this works from a static SSR parent; an interactive parent can’t host children with a different mode.)
Defer expensive islands
Render placeholders on the server; wake up components on idle or visibility.
@if (!isVisible)
{
<Skeleton height="320" />
}
else
{
<Chart @rendermode="InteractiveWebAssembly" Data="data" />
}
Combine with an IntersectionObserver via a tiny JS interop to flip isVisible.
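A sketch of that interop, assuming a small module at ./js/observe.js that exports observe(element, dotnetRef) and calls dotnetRef.invokeMethodAsync('OnVisible') once the element scrolls into view (the module path, DeferredChart type name, and Skeleton/Chart components are placeholders):

```razor
@implements IAsyncDisposable

<div @ref="_anchor">
    @if (!isVisible)
    {
        <Skeleton height="320" />
    }
    else
    {
        <Chart @rendermode="InteractiveWebAssembly" Data="data" />
    }
</div>

@code {
    private ElementReference _anchor;
    private bool isVisible;
    private IJSObjectReference? _observeMod;
    private DotNetObjectReference<DeferredChart>? _self; // DeferredChart = this component

    protected override async Task OnAfterRenderAsync(bool firstRender)
    {
        if (firstRender)
        {
            _observeMod = await JS.InvokeAsync<IJSObjectReference>("import", "./js/observe.js");
            _self = DotNetObjectReference.Create(this);
            await _observeMod.InvokeVoidAsync("observe", _anchor, _self);
        }
    }

    [JSInvokable]
    public void OnVisible()
    {
        isVisible = true;   // wake the expensive island
        StateHasChanged();
    }

    public async ValueTask DisposeAsync()
    {
        _self?.Dispose();
        if (_observeMod is not null) await _observeMod.DisposeAsync();
    }
}
```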
Forms that don’t lag
- Prefer onchange over oninput for heavy validation.
- Debounce inputs.
- Avoid <Select> with enormous datasets—use typeahead with virtualization.
<Virtualize Items="FilteredOptions" Context="opt" ItemSize="32">
<div @onclick="() => Select(opt)">@opt.Text</div>
</Virtualize>
Batching UI updates
When background data arrives in bursts, batch updates to avoid re-render storms.
private readonly Channel<Action> _ui = Channel.CreateUnbounded<Action>();
protected override void OnInitialized()
{
    _ = Task.Run(async () =>
    {
        var buffer = new List<Action>();
        var timer = new PeriodicTimer(TimeSpan.FromMilliseconds(50));
        while (await timer.WaitForNextTickAsync())
        {
            while (_ui.Reader.TryRead(out var action)) // drain everything queued this tick
                buffer.Add(action);
            if (buffer.Count == 0) continue;
            var copy = buffer.ToArray(); buffer.Clear();
            // One render per 50 ms batch instead of one per update.
            await InvokeAsync(() => { foreach (var a in copy) a(); StateHasChanged(); });
        }
    });
}
void ApplyUpdate(Action a) => _ui.Writer.TryWrite(a);
This pattern keeps the UI smooth under high-frequency updates.
Memory discipline (Server & WASM)
- Dispose IAsyncDisposable modules and timers; unhook event handlers in Dispose.
- Be careful with large lists in component fields; prefer immutable snapshots.
- In Server, watch allocations per circuit; test with multiple concurrent users.
Minimal base for safe cleanup:
@implements IAsyncDisposable
@code {
private Timer? _timer;
public async ValueTask DisposeAsync()
{
_timer?.Dispose();
if (_mod is IAsyncDisposable d) await d.DisposeAsync();
}
}
Build & runtime switches that matter
- Release builds with -c Release (always!)
- Turn on ResponseCompression for Server/SSR.
- Enable HTTP/2 or HTTP/3 where possible.
- For WASM: test AOT vs IL to balance startup vs CPU.
Example middleware:
builder.Services.AddResponseCompression(o =>
{
o.EnableForHttps = true;
o.Providers.Add<BrotliCompressionProvider>();
});
var app = builder.Build();
app.UseResponseCompression();
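To advertise HTTP/3 alongside HTTP/1.1 and HTTP/2 on Kestrel, a sketch (HTTP/3 also requires TLS and OS-level QUIC support; the port is illustrative):

```csharp
builder.WebHost.ConfigureKestrel(kestrel =>
{
    kestrel.ListenAnyIP(443, listen =>
    {
        // Offer all three protocols; each client negotiates the best one it supports.
        listen.Protocols = HttpProtocols.Http1AndHttp2AndHttp3;
        listen.UseHttps();
    });
});
```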
Diagnostics you’ll actually use this week
- Browser DevTools: coverage (unused JS/CSS), performance flamecharts.
- dotnet-counters (Server) to watch CPU, GC, and SignalR.
- dotnet-trace for deeper sampling when a page hitches.
- Application Insights/OpenTelemetry: log slow pages and endpoints with correlation IDs.
Simple perf logger:
public static class Perf
{
public static async Task<T> Time<T>(string name, Func<Task<T>> work)
{
var sw = Stopwatch.StartNew();
try { return await work(); }
finally { Console.WriteLine($"[Perf] {name} {sw.ElapsedMilliseconds} ms"); }
}
}
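Usage, assuming a hypothetical async data call – the load keeps its shape, and the elapsed time lands in the log:

```csharp
// Wrap any awaitable hot path; the name tags the log line.
var customers = await Perf.Time("LoadCustomers", () => _client.GetCustomersAsync());
```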
Common anti-patterns (seen in real code)
- Doing heavy LINQ/regex in property getters referenced by markup.
- Recreating HttpClient per request instead of resolving it from DI.
- Mutating lists bound to UI without @key (causes item churn).
- Large RenderFragment trees generated on every keystroke.
- Not disposing IJSObjectReference and DotNetObjectReference.
- Blocking tasks with .Result/.Wait() inside event handlers (deadlocks & thread starvation).
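The blocking one is the cheapest to fix, because Blazor awaits async event handlers natively. A before/after sketch (_client.GetItemsAsync is a placeholder for your own call):

```csharp
// Anti-pattern: blocks the renderer/circuit thread and risks deadlock.
void OnClick() => _items = _client.GetItemsAsync().Result;

// Fix: make the handler async and await – bind it with @onclick="OnClickAsync".
async Task OnClickAsync() => _items = await _client.GetItemsAsync();
```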
Fix the top 3 and you’ll feel the difference immediately.
A pragmatic checklist (print this!)
Startup & build
- Release build, trimming on (WASM), Brotli on, CDN for static.
- Choose SSR + selective interactivity for fast first paint.
Rendering
- @key for lists; ShouldRender where appropriate.
- Virtualize long lists; throttle oninput.
- Precompute view models; avoid heavy work in render.
Interop & memory
- Cache JS modules; dispose references.
- Avoid large JSON payloads; pass IDs.
Data
- Cancel stale requests; stream results; cache lookups.
Diagnostics
- Add render timers; HTTP timings; watch counters.
Tape this list next to your monitor. Seriously.
FAQ: Quick answers to hot questions
How do I shrink WASM startup time?
Turn on trimming, lazy-load assemblies, serve Brotli via CDN, and consider SSR to stream the initial HTML.
Is Blazor Server or WASM faster?
It depends. Server often feels faster initially (no large download), but interactive latency depends on the network. WASM has no round-trips; once loaded, complex client logic is very responsive.
When is AOT worth it?
Use AOT when you have compute-heavy client code (crypto, parsing, math). For typical forms/CRUD, IL with trimming is often good enough.
Why does my search box lag?
You’re probably filtering a large list on every keypress and re-rendering thousands of rows. Debounce input, virtualize, and precompute filtered subsets.
How do I find unnecessary re-renders?
Add a render timer per component and log call counts. Watch for frequent StateHasChanged in event handlers and background tasks.
A <Select> with 10k items freezes – what now?
Don’t. Use a virtualized typeahead (async search + Virtualize). Load options page-by-page.
Does HTTP/2 or HTTP/3 make things faster?
For many small calls, yes (binary framing, multiplexing). But measure; network and server implementation matter more than protocol labels.
How do I keep mobile users happy?
Favor SSR for first paint, compress hard, minimize JS, and avoid heavy animations. Test on low-end devices using CPU throttling.
Conclusion: Your Blazor can be instant with the right moves
You don’t need a rewrite to get dramatic wins. Pick the right render mode, ship fewer bytes, avoid pointless diffs, and cache/dispose interop properly. Start with the checklist, then instrument and iterate. In my projects, these steps cut TTI by 30–70% with simple, reversible changes.
Which component hurts the most in your app—lists, forms, or charts? Drop a comment with your worst offender and I’ll suggest a targeted fix.