Still chaining GroupBy → Select → Count just to tally items? Or building temporary lists just to compute sums and averages? With .NET 9 you can cut that noise down to a single call and keep the intent crystal clear.
I first met CountBy and AggregateBy while trimming a messy analytics pipeline: dozens of GroupBy calls, custom projections, and accidental extra allocations. After swapping a few hot spots, the code got shorter, easier to read, and a bit leaner on memory. In this post I’ll show you how to get the same win in your codebase.
What’s new and where it lives
- Available: .NET 9 (C# 13), namespace: System.Linq.
- Goal: do common grouping tasks without building intermediate group collections.
- Return type: both methods yield IEnumerable<KeyValuePair<TKey, TValue>>, so you can foreach the result or turn it into a Dictionary.
Quick mental model: CountBy is a built‑in histogram by key. AggregateBy is “Aggregate, but per key”.
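A minimal sketch of both ideas, assuming nothing more than a plain list of words:

string[] words = ["apple", "avocado", "banana", "cherry"];

// CountBy: histogram by first letter -> 'a': 2, 'b': 1, 'c': 1
var histogram = words.CountBy(w => w[0]);

// AggregateBy: longest word per first letter -> 'a': "avocado", 'b': "banana", 'c': "cherry"
var longest = words.AggregateBy(
    keySelector: w => w[0],
    seed: "",
    update: (best, w) => w.Length > best.Length ? w : best);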
CountBy in 60 seconds
Signature (simplified):
IEnumerable<KeyValuePair<TKey,int>> CountBy<TSource,TKey>(
    this IEnumerable<TSource> source,
    Func<TSource, TKey> keySelector,
    IEqualityComparer<TKey>? keyComparer = null)
    where TKey : notnull

Why you’ll use it
- Count items by a property in one pass.
- No GroupBy(...).Select(g => new { g.Key, Count = g.Count() }) boilerplate.
- Optional comparer for case‑insensitive keys.
Basic example: API logs by status code
var counts = logs
    .CountBy(x => x.StatusCode)
    .OrderBy(kv => kv.Key); // kv.Key = status, kv.Value = count
foreach (var (code, hits) in counts)
    Console.WriteLine($"{code}: {hits}");With a custom comparer
var byCategory = products.CountBy(p => p.Category, StringComparer.OrdinalIgnoreCase);

Counting tags across posts (flatten first, then CountBy)
var tagCounts = posts
    .SelectMany(p => p.Tags)
    .CountBy(tag => tag)
    .OrderByDescending(kv => kv.Value);

Turn into a dictionary when you need random access
var lookup = tagCounts.ToDictionary(kv => kv.Key, kv => kv.Value);

AggregateBy in 60 seconds
AggregateBy lets you fold values per key without materializing groups.
Overloads (simplified):
IEnumerable<KeyValuePair<TKey,TAcc>> AggregateBy<TSource,TKey,TAcc>(
    this IEnumerable<TSource> source,
    Func<TSource,TKey> keySelector,
    TAcc seed,
    Func<TAcc,TSource,TAcc> update,
    IEqualityComparer<TKey>? keyComparer = null)
    where TKey : notnull
IEnumerable<KeyValuePair<TKey,TAcc>> AggregateBy<TSource,TKey,TAcc>(
    this IEnumerable<TSource> source,
    Func<TSource,TKey> keySelector,
    Func<TKey,TAcc> seedSelector, // seed can depend on the key
    Func<TAcc,TSource,TAcc> update,
    IEqualityComparer<TKey>? keyComparer = null)
    where TKey : notnull

Summing totals by key
var revenueByCategory = orders.AggregateBy(
    keySelector: o => o.Category,
    seed: 0m,
    update: (sum, o) => sum + o.Price * o.Quantity);

Average in a single pass (sum + count accumulator)
public readonly record struct Avg(decimal Sum, int Count)
{
    public decimal Value => Count == 0 ? 0 : Sum / Count;
}
var avgTicketByUser = orders.AggregateBy(
    keySelector: o => o.UserId,
    seed: new Avg(0, 0),
    update: (acc, o) => new Avg(acc.Sum + o.Total, acc.Count + 1))
  .Select(kv => new { kv.Key, Average = kv.Value.Value });

Min / Max in one pass
public readonly record struct MinMax(decimal Min, decimal Max)
{
    public static MinMax Empty => new(decimal.MaxValue, decimal.MinValue);
    public MinMax Push(decimal value) => new(
        Min: value < Min ? value : Min,
        Max: value > Max ? value : Max);
}
var priceRangeByBrand = products.AggregateBy(
    keySelector: p => p.Brand,
    seed: MinMax.Empty,
    update: (acc, p) => acc.Push(p.Price));

Top‑N per key (simple version)
var top2ByCustomer = orders.AggregateBy(
    keySelector: o => o.CustomerId,
    // seedSelector (not seed): each key needs its own List. With the seed
    // overload, every key would share and mutate the same list instance.
    seedSelector: _ => new List<Order>(capacity: 2),
    update: (list, o) => {
        list.Add(o);
        list.Sort((a, b) => b.Total.CompareTo(a.Total));
        if (list.Count > 2) list.RemoveAt(2);
        return list;
    });

Tip: For heavy Top‑N use, prefer a tiny heap structure over List.Sort to keep updates cheap; a sketch follows.
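A minimal sketch of that tip with the BCL’s PriorityQueue, which is a min‑heap: the smallest total sits at the root, so once the heap holds more than N items, dequeuing evicts the current minimum in O(log N) instead of re‑sorting a list.

// Top-2 orders per customer with a bounded min-heap instead of repeated sorts.
var top2Heap = orders.AggregateBy(
    keySelector: o => o.CustomerId,
    seedSelector: _ => new PriorityQueue<Order, decimal>(), // fresh heap per key
    update: (heap, o) => {
        heap.Enqueue(o, o.Total);           // priority = order total
        if (heap.Count > 2) heap.Dequeue(); // evict the smallest total
        return heap;
    });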
Real‑world slices
1) Operations dashboard: SLA breaches by service
In one project we needed a quick histogram of SLA breaches per service to power a dashboard widget.
var breachesByService = incidents
    .Where(i => i.SlaBreached)
    .CountBy(i => i.ServiceName)
    .OrderByDescending(kv => kv.Value)
    .Take(10);

No groups, no custom DTO. The intent reads like English: “count breaches by service.”
2) E‑commerce: basket totals, average order, and range per category
var totals = orders.AggregateBy(
    keySelector: o => o.Category,
    seed: 0m,
    update: (sum, o) => sum + o.Price * o.Quantity);
var average = orders.AggregateBy(
    keySelector: o => o.Category,
    seed: new Avg(0,0),
    update: (acc, o) => new Avg(acc.Sum + o.Total, acc.Count + 1))
  .Select(kv => new { kv.Key, Average = kv.Value.Value });
var ranges = orders.AggregateBy(
    keySelector: o => o.Category,
    seed: MinMax.Empty,
    update: (acc, o) => acc.Push(o.Price));

3) Security: most frequent auth failures per tenant
var topErrorsByTenant = failures
    .Where(f => f.TenantId != null)
    .GroupBy(f => new { f.TenantId, f.Reason }) // reason text is the key inside tenant
    .Select(g => new { g.Key.TenantId, g.Key.Reason, Count = g.Count() })
    .GroupBy(x => x.TenantId)
    .Select(g => new {
        TenantId = g.Key,
        Top3 = g.OrderByDescending(x => x.Count).Take(3).ToList()
    });

Rewrite with CountBy and a small AggregateBy for the top‑3:
var topErrorsByTenant2 = failures
    .Where(f => f.TenantId != null)
    .CountBy(f => new { f.TenantId, f.Reason })
    .AggregateBy(
        keySelector: kv => kv.Key.TenantId,
        // fresh list per tenant; a single shared seed list would leak state across keys
        seedSelector: _ => new List<(string Reason, int Count)>(capacity: 3),
        update: (list, kv) => {
            list.Add((kv.Key.Reason, kv.Value));
            list.Sort((a, b) => b.Count.CompareTo(a.Count));
            if (list.Count > 3) list.RemoveAt(3);
            return list;
        });

Why not just GroupBy?
GroupBy is great when you need the items in each group. But many tasks only need a number, a sum, a min/max pair, or a small sketch value. In those cases GroupBy:
- Builds an IGrouping<TKey, TElement> with a backing collection per key.
- Holds on to every element until you finish aggregating.
- Encourages extra projections just to expose Key and your computed value.
CountBy and AggregateBy cut straight to the result value per key. That usually means fewer allocations and clearer intent. Always bench your hot paths, but as a rule of thumb I reach for these first when the final result per key is a scalar or a tiny struct.
Mental check: “Do I need the items themselves or only a number/summary per key?” If it’s the latter, you probably want CountBy or AggregateBy. Conceptually, both boil down to one dictionary of running state, roughly like the sketch below.
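This is a rough sketch only, not the actual BCL source (the real operators enumerate lazily, validate arguments, and accept a comparer), but it grounds the allocation claim: one dictionary of accumulators, no per‑key collections.

// Rough sketch of the shape of work AggregateBy does per key.
static IEnumerable<KeyValuePair<TKey, TAcc>> AggregateBySketch<TSource, TKey, TAcc>(
    IEnumerable<TSource> source,
    Func<TSource, TKey> keySelector,
    TAcc seed,
    Func<TAcc, TSource, TAcc> update) where TKey : notnull
{
    var state = new Dictionary<TKey, TAcc>();
    foreach (var item in source)
    {
        var key = keySelector(item);
        state[key] = update(state.TryGetValue(key, out var acc) ? acc : seed, item);
    }
    return state; // Dictionary<,> enumerates as KeyValuePair<TKey, TAcc>
}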
Async streams
Processing an event stream? There is an AsyncEnumerable.CountBy for IAsyncEnumerable<T> with sync and async key selectors. For custom per‑key folding over async streams, use AggregateAsync over a buffered projection, or batch events and fold them with AggregateBy when they hit memory. A simple example with CountBy:
await foreach (var kv in events
    .Where(e => e.Kind == EventKind.Warning)
    .CountBy(e => e.Source))
{
    Console.WriteLine($"{kv.Key}: {kv.Value}");
}

Note: translation support in ORMs varies. If your query runs on a provider (EF Core, Mongo LINQ, etc.), verify how CountBy behaves. If it can’t translate, bring the data into memory (AsEnumerable) or stick to the provider’s native constructs.
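A minimal sketch of that boundary, assuming a hypothetical EF Core set db.Orders and a cutoff date: filter on the server, then switch to LINQ‑to‑Objects for the counting.

// db.Orders and cutoff are assumptions for illustration.
var countsByCountry = db.Orders
    .Where(o => o.CreatedAt >= cutoff) // runs on the server (translated to SQL)
    .AsEnumerable()                    // everything below runs client-side
    .CountBy(o => o.ShippingCountry);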
API cheatsheet (copy/paste friendly)
CountBy
// Count orders by country
var byCountry = orders.CountBy(o => o.ShippingCountry);
// Case‑insensitive keys
var byCity = orders.CountBy(o => o.City, StringComparer.OrdinalIgnoreCase);
// To dictionary
var map = byCountry.ToDictionary(kv => kv.Key, kv => kv.Value);

AggregateBy
// Sum by key
var sumByDay = points.AggregateBy(p => p.Day, 0, (sum, p) => sum + p.Value);
// Average by key
var avgByDay = points.AggregateBy(
    keySelector: p => p.Day,
    seed: new Avg(0,0),
    update: (acc, p) => new Avg(acc.Sum + p.Value, acc.Count + 1))
  .Select(kv => new { Day = kv.Key, Average = kv.Value.Value });
// Min/Max by key
var rangeByDay = points.AggregateBy(
    keySelector: p => p.Day,
    seed: MinMax.Empty,
    update: (acc, p) => acc.Push(p.Value));

Subtleties you should know
- Return order: the result is just an IEnumerable<KeyValuePair<...>>. If you need ordering, call OrderBy/ThenBy.
- Comparer matters: pass the IEqualityComparer<TKey> you need (case, culture, custom hash). The default is the default equality for TKey.
- Seed vs. seed selector: choose seedSelector when the initial state depends on the key (e.g., pre‑size a list by key, preload thresholds per tenant), as in the sketch after this list. It’s also the right overload whenever the accumulator is a mutable reference type, so each key gets a fresh instance.
- Value objects rock: use tiny record struct accumulators to keep state (sum, count, min, max). They’re compact and easy to reason about.
- One pass: both methods stream the source once. If you also iterate the same list elsewhere, those are extra passes in your code, not inside the operator.
- Null keys: TKey carries a notnull constraint, and the operators accumulate into a dictionary, so a null key throws at runtime. Filter or map nulls first if your domain produces them.
- Lifetime: the operators don’t keep references to the source after enumeration completes.
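A small sketch of a key‑dependent seed, assuming a hypothetical thresholds dictionary of per‑tenant limits and a charges sequence with TenantId and Amount:

// thresholds: Dictionary<string, decimal> of per-tenant limits (hypothetical).
var spendVsLimit = charges.AggregateBy(
    keySelector: c => c.TenantId,
    seedSelector: tenant => (Limit: thresholds[tenant], Spent: 0m), // seed depends on the key
    update: (acc, c) => (acc.Limit, acc.Spent + c.Amount));

var overLimit = spendVsLimit.Where(kv => kv.Value.Spent > kv.Value.Limit);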
 
From GroupBy to *By: migration guide
Before:
var counts = orders
    .GroupBy(o => o.Country)
    .Select(g => new { Key = g.Key, Count = g.Count() });

After:
var counts = orders.CountBy(o => o.Country);

Before:
var totals = orders
    .GroupBy(o => o.Category)
    .Select(g => new { g.Key, Total = g.Sum(o => o.Total) });

After:
var totals = orders.AggregateBy(
    keySelector: o => o.Category,
    seed: 0m,
    update: (sum, o) => sum + o.Total);

Before:
var minMax = orders
    .GroupBy(o => o.Store)
    .Select(g => new {
        g.Key,
        Min = g.Min(o => o.Total),
        Max = g.Max(o => o.Total)
    });

After:
var minMax = orders.AggregateBy(
    keySelector: o => o.Store,
    seed: MinMax.Empty,
    update: (acc, o) => acc.Push(o.Total));

Benchmark harness you can run
Use this to compare GroupBy vs CountBy/AggregateBy on your data shape. I keep a variant of this in test projects.
// <PackageReference Include="BenchmarkDotNet" Version="0.14.0" />
// Run in Release, e.g. BenchmarkRunner.Run<GroupingBench>();
using BenchmarkDotNet.Attributes;

[MemoryDiagnoser]
public class GroupingBench
{
    private readonly Order[] _orders;
    public GroupingBench()
    {
        var rand = new Random(42);
        _orders = Enumerable.Range(0, 50_000)
            .Select(i => new Order
            {
                Id = i,
                Category = i % 16 == 0 ? "Promo" : ($"C{i % 8}"),
                Price = (decimal)rand.NextDouble() * 100,
                Quantity = rand.Next(1, 5)
            })
            .ToArray();
    }
    [Benchmark]
    public Dictionary<string,int> GroupByCount() =>
        _orders.GroupBy(o => o.Category)
               .ToDictionary(g => g.Key, g => g.Count());
    [Benchmark]
    public Dictionary<string,int> CountBy() =>
        _orders.CountBy(o => o.Category)
               .ToDictionary(kv => kv.Key, kv => kv.Value);
    [Benchmark]
    public Dictionary<string,decimal> GroupBySum() =>
        _orders.GroupBy(o => o.Category)
               .ToDictionary(g => g.Key, g => g.Sum(o => o.Price * o.Quantity));
    [Benchmark]
    public Dictionary<string,decimal> AggregateBySum() =>
        _orders.AggregateBy(o => o.Category, 0m, (sum, o) => sum + o.Price * o.Quantity)
               .ToDictionary(kv => kv.Key, kv => kv.Value);
}
public sealed class Order
{
    public int Id { get; set; }
    public string Category { get; set; } = "";
    public decimal Price { get; set; }
    public int Quantity { get; set; }
    public decimal Total => Price * Quantity;
}

Run it and check both allocations and mean time for your dataset. On workloads where you only need a scalar per key, CountBy/AggregateBy typically reduce temporary allocations.
FAQ: quick answers about CountBy & AggregateBy
- Do they guarantee result order? No. Sort the result if you need one.
- What do they return? IEnumerable<KeyValuePair<TKey,TValue>>. Convert with ToDictionary when needed.
- Do query providers translate them? Check your provider. When unsupported, the query may run on the client after AsEnumerable(). For pure LINQ‑to‑Objects they shine.
- Case‑insensitive counting? Pass a comparer: CountBy(x => x.Tag, StringComparer.OrdinalIgnoreCase).
- Average per key? Use an accumulator with sum and count (see the examples above).
- Async streams? There is AsyncEnumerable.CountBy for IAsyncEnumerable<T>. For custom folds over async streams use AggregateAsync, or buffer/batch and then AggregateBy.
- Thread safety? Same as other LINQ operators over in‑memory collections: safe to use from multiple threads on independent sequences; not safe to mutate the source while enumerating.
- Memory vs GroupBy? GroupBy keeps all group items; CountBy/AggregateBy keep only the running state per key, which is typically smaller.
Conclusion: clearer code and fewer moving parts
If your goal is a count, a sum, an average, or another compact summary per key, reach for CountBy and AggregateBy first. You’ll write less code, your intent will be obvious to the next reader, and you avoid building groups you don’t need. Try replacing a single GroupBy chain in your project today and see how small the diff is, then share your result in the comments.
