CMS and Advanced Caching (In Nuxt)

I mentioned in my “Headless – Not Hovedløst” post that I wanted to follow up with a series of shorter posts.

It’s been months.
So let’s get to it, starting with caching.


If you’re reading this, there’s a good chance I don’t need to convince you that we should “cache anything cacheable”. Still, to make sure no one gets left behind, let’s boil the motivation down to its absolute core.

We cache for two reasons:

  • Performance

  • Cost

That’s it.

Both of those contain a near-infinite number of sub-reasons, but they all collapse into those two words.

We want performance because it improves the visitor experience, and because Google, Amazon, and a long list of very smart people have told us it matters.

Amazon famously found that every additional 100 ms of page load time reduced sales by roughly 1%.
Google observed that adding 500 ms to its search results page reduced traffic by almost 20%.

[Image: a guy going fast on a bike. Caption: “Weeeeeee”]

But also, let’s be honest, going fast is fun.

Cost is a different beast. Being cheap isn’t inherently fun, but happy clients are. And very few things are cheaper than not executing code at all.

Serving content from Cloudflare or Azure Front Door is dramatically cheaper than load-balancing ten VMs or App Services across multiple regions.

Easy Caching

Using Cloudflare or Front Door is what I’d call easy caching.

You can almost just turn it on.

Of course, it comes with sacrifices, like everything else, and in a CMS context one important detail is often forgotten: caching the output is great, but it’s only useful if the cached output is invalidated when the content changes.

Editors hit publish.

Content gets updated.

Media is replaced.

And yet, many solutions respond to this by invalidating everything.

Why?

Because tracking what actually changed is difficult.


Cache invalidation isn’t hard because the tools are bad. It’s hard because content relationships are complex. A single page might reference blocks, media, compositions, and related content that lives somewhere else entirely.

That’s why I built Headless Cache Keys.

It’s a small Umbraco package that enriches Content Delivery API responses with a set of cache keys representing all content and media dependencies for a given response.

Those keys can then be used to selectively purge Cloudflare or Front Door, invalidating only what actually changed.
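
To make that a bit more concrete, here’s a rough sketch of what an enriched response could look like. The property names are mine, for illustration, not the package’s actual contract:

```ts
// Hypothetical shape of an enriched Content Delivery API response.
// Property names are illustrative; check the package for the real contract.
interface EnrichedContentResponse {
  // The usual Content Delivery API payload (route, properties, and so on).
  content: Record<string, unknown>;
  // One key per content/media item this response depends on: the page
  // itself, referenced blocks and compositions, and any media used.
  cacheKeys: string[]; // e.g. ["content:<id>", "media:<id>"]
}
```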

I run this on Kjeldsen.dev, partly to dogfood it, but also to make the behavior visible. If you inspect the HTML of the page, just before </body> you’ll find:

  • A timestamp for when the page was cached

  • All cache keys associated with that response

At a high level, the flow looks like this:

  • Register a NotificationHandler in Umbraco

  • React to publish events

  • Extract affected URLs via cache keys

  • Send a targeted purge request to your CDN (sketched below)
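
The purge step is the only part that talks to the CDN. Here’s a minimal sketch of it against Cloudflare’s purge-by-URL endpoint; the endpoint is real, but the zone ID, token, and error handling are placeholder-level:

```ts
// Minimal sketch: purge a specific set of URLs from Cloudflare.
// CLOUDFLARE_ZONE_ID and CLOUDFLARE_API_TOKEN are placeholders you provide.
async function purgeUrls(urls: string[]): Promise<void> {
  const zoneId = process.env.CLOUDFLARE_ZONE_ID;
  const res = await fetch(
    `https://api.cloudflare.com/client/v4/zones/${zoneId}/purge_cache`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${process.env.CLOUDFLARE_API_TOKEN}`,
        "Content-Type": "application/json",
      },
      // Purge by URL ("files") works on all Cloudflare plans, unlike
      // purge by cache tag, which is Enterprise-only.
      body: JSON.stringify({ files: urls }),
    }
  );
  if (!res.ok) {
    throw new Error(`Cloudflare purge failed: ${res.status}`);
  }
}
```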

You can see a full example of this using Azure Front Door here (it’s too long to include directly in this post).

There is, however, a practical limitation.

Azure Front Door purges can take up to ten minutes to complete.
Cloudflare is faster, but still not instant.

That’s not a technical problem. It’s an editor expectation problem, and one you need to align on early.


Advanced Caching

Now let’s say you want invalidation NOW.

Maybe ten minutes is unacceptable.
Maybe you don’t want to cache your HTML responses at the CDN layer. You should still keep Cloudflare or Azure Front Door in front of your site, since they also give you TLS termination, WAF, rate limiting, and traffic control, not just caching.

Or maybe, and this is the interesting one, your content varies by visitor segment.

All good reasons.

This last case is why I’ve moved my caching layer closer to the application on Kjeldsen.dev.

I run Umbraco Engage there, mostly to experiment and understand how it behaves internally. But it introduces a challenge that CDN caching alone doesn’t solve well: personalized responses.

Yes, this is technically solvable with Cloudflare Enterprise.
No, I don’t think “Contact Sales” pricing belongs in this blog post.

So what does “closer to the application” mean?

It means this:

  • Media, JS, CSS, and static assets are still served and cached by the CDN

  • HTML responses are rendered server-side

  • The cache lives inside the application

If a response is cached, no meaningful code runs.

Because I’m using Nuxt with a headless CMS, I rely on Nuxt Multi Cache, which provides route-level caching among other things. The specific package isn’t important; any SSR framework can do this. The behavior is what matters.
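
Here’s a rough sketch of that behavior. This is not Nuxt Multi Cache’s actual API, just a minimal in-memory route cache to show the shape of it:

```ts
// A generic sketch of the behavior, not Nuxt Multi Cache's actual API.
// In a real app the maps would live in a shared server util so the purge
// endpoint (below) can reach them.
import { defineEventHandler } from "h3";

export const routeCache = new Map<string, string>();     // cacheKey -> rendered HTML
export const routeDeps = new Map<string, Set<string>>(); // cacheKey -> CMS cache keys

export default defineEventHandler(async (event) => {
  const key = event.path; // keyed by URL for now; segments come later

  const hit = routeCache.get(key);
  if (hit !== undefined) return hit; // cache hit: no rendering, no CMS calls

  // Cache miss: render, remember which CMS items the page depends on, store.
  const { html, cacheKeys } = await renderPage(event.path); // hypothetical render
  routeCache.set(key, html);
  routeDeps.set(key, new Set(cacheKeys));
  return html;
});

// Hypothetical stand-in for the actual SSR/CMS rendering work. The cacheKeys
// would come from the enriched Content Delivery API response.
async function renderPage(path: string) {
  return { html: `<html><body>${path}</body></html>`, cacheKeys: [] as string[] };
}
```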

Instead of purging a CDN, I invalidate the application cache directly when Umbraco publishes content.

That invalidation is effectively instant.
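
In practice, that can be a small endpoint that Umbraco’s notification handler calls with the affected cache keys. A sketch, with an assumed route path and payload shape:

```ts
// Sketch of the purge endpoint Umbraco calls on publish. The route location
// and payload shape are assumptions, not a published contract.
import { defineEventHandler, readBody } from "h3";
import { routeCache, routeDeps } from "./routeCache"; // the maps from the sketch above

export default defineEventHandler(async (event) => {
  // Expected body: { keys: ["content:<id>", "media:<id>", ...] }
  const { keys } = await readBody<{ keys: string[] }>(event);

  // Drop only the cached routes that depend on something that just changed.
  for (const [route, deps] of routeDeps) {
    if (keys.some((k) => deps.has(k))) {
      routeCache.delete(route);
      routeDeps.delete(route);
    }
  }
  return { ok: true };
});
```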

Caching and Personalization

This is where things get tricky.

Let’s say you have a page that’s personalized.
To keep it simple, imagine a blog post that shows a picture of a dog or a cat depending on whether the visitor is on desktop or mobile.

[Image: picture of a cat. Caption: “Don’t believe me? Open this link on your phone” - Mobile-kitty]

It’s tempting to conclude that this page shouldn’t be cached.

And sometimes, that’s true.

If content is hyper-personalized, tied to a visitor ID, name, or individual behavior, caching can explode into thousands of variations and quickly become counterproductive.

But if content varies by segment, and you have a small, bounded number of segments, caching is still very much on the table.

The key is constructing the right cache key.

CDNs typically cache by URL:

a.b.c/my-page

Application caches don’t have that limitation.
They can include anything available at request time.

On Kjeldsen.dev, the cache key for a server-rendered page is constructed from:

  • The page URL

  • The value of the engageSegment cookie

That means a single page might generate two cache entries:

a.b.c/my-page::seg:desktop-user

a.b.c/my-page::seg:default

Reading a cookie during request handling is no more expensive than reading the URL itself. It’s already part of the request and adds no meaningful overhead.
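
In h3 (the server engine under Nuxt), building that key is nearly a one-liner. A sketch:

```ts
import { getCookie, type H3Event } from "h3";

// Build the route cache key from the URL plus the visitor's segment.
// First-time visitors have no cookie yet, so they fall back to "default".
function buildCacheKey(event: H3Event): string {
  const segment = getCookie(event, "engageSegment") ?? "default";
  return `${event.path}::seg:${segment}`; // e.g. "/my-page::seg:default"
}
```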

A first-time visitor will naturally fall back to the default segment. If your setup requires immediate segmentation, for example based on IP, geography, or time of day, you can perform a one-off personalization pass on the initial request using middleware.

This still allows segmented content to be cached safely, without sacrificing correctness or performance.
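
A sketch of such a middleware, using the desktop/mobile split from the cat example above; the user-agent check and segment names are purely illustrative:

```ts
// server/middleware/segment.ts (hypothetical): a one-off personalization
// pass that assigns a segment on the very first request.
import { defineEventHandler, getCookie, setCookie, getRequestHeader } from "h3";

export default defineEventHandler((event) => {
  if (getCookie(event, "engageSegment")) return; // already segmented

  const ua = getRequestHeader(event, "user-agent") ?? "";
  const segment = /Mobi|Android/i.test(ua) ? "mobile-user" : "desktop-user";

  // From the next request on, this visitor hits the segmented cache entry.
  setCookie(event, "engageSegment", segment, { path: "/", sameSite: "lax" });
});
```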

Conclusion

Caching isn’t a switch you flip.
It’s a set of deliberate trade-offs.

CDN caching is cheap, fast, and incredibly effective. But the moment you care about when content changes, who it changes for, or how quickly those changes should be visible, you need more control.

The rule I tend to follow is this:

Cache as close to the user as possible,
but invalidate as close to the truth as necessary.

Sometimes that truth lives in a CDN purge.
Sometimes it lives inside your application.
Sometimes it depends on a cookie, a segment, or a publishing event that happened milliseconds ago.

What you should avoid is defaulting to “clear everything” because it feels simpler. That’s not simplicity. It’s giving up control, observability, and performance.

In the next post, I’ll stay in the same headless space but shift focus slightly: securing the Content Delivery API.

Because once you start caching aggressively and serving content fast, you also need to be very clear about who is allowed to fetch what, and why.

Caching may be the art of being strategically lazy.
Security is the art of being deliberately strict.