Introduction to Caching
Caching is a technique for temporarily storing frequently accessed or expensive-to-compute data to improve performance. It reduces database and server load while improving response times and user experience.
In-Memory Cache vs. Redis
An in-memory cache stores data in application memory and is cleared on restart or deployment. Unlike Redis, it does not persist across restarts but offers faster access due to RAM-based lookups and simpler setup.
Using the Otter Caching Library
The implementation uses the Otter Go library, a high-performance, type-safe generic cache. A custom generic cache wrapper is created with configurable size and time-to-live (TTL) settings.
Cache Abstraction Design
A generic cache struct is defined with helper methods: get, set, delete, and invalidateAll. This abstraction simplifies cache usage and enables easy invalidation when content changes.
Page-Level Caching
Two caches are introduced: one for rendered page components and another for string-based assets. The home page and article pages are cached using unique keys based on parameters such as pagination and slugs.
Asset Caching (Robots and Sitemap)
Static responses like robots.txt and sitemap.xml are cached to avoid repeated regeneration. Cached values are returned when available, otherwise generated and stored for future requests.
Performance and Extensibility Considerations
Cache size and TTL values are configurable and should be tuned based on memory constraints. The solution is extensible and can be enhanced with advanced options or replaced with persistent caching if needed.
Next Steps: Browser Caching
The next step introduces browser-level caching for static assets such as CSS and JavaScript, further reducing server requests and improving load times.
I want to add some caching to the blog, and I want to use what is known as an in-memory cache. But before we get into what that is and how we do it, let's quickly talk about what caching is. Caching is simply a way to temporarily store data that we access frequently and that can be expensive to acquire. So you can imagine things like
operations that we run a lot, or things that involve the database. For example, on our home page we query all of our articles. This is very unlikely to be a bottleneck, but it is very easy to add this in-memory cache, and it will make things faster. So why are we going to add a cache? Well, the main reasons are to
improve performance: you get much faster load times, you reduce the load on the database and the server, and you get a better user experience because pages load faster. And it's not only pages; it can also be expensive queries. For example, let's say we have a complex model that requires a lot of different kinds of queries
to acquire the data we need to build it up. In that case, it can be very beneficial to cache the resulting model, so we can just retrieve it whenever we need it. That becomes very handy if we have something that doesn't change a lot but is computationally expensive to acquire. Now, we could use something like Redis, which is a key-value store that will
persist the data for as long as we set it to be valid, or however long it should live. With an in-memory cache, whenever we deploy or restart our application, the cache goes away, so we will have to re-cache it.
You can definitely add Redis if you want to, but starting out with an in-memory cache is a really good starting point because it's very simple to do and it will improve performance. And then if you notice or you identify that you really need these things to be persistent across new deploys or reloads, then you can consider adding Redis. But it's very good to start as simple as possible and then build on top.
This is also faster than Redis because the data is stored in application memory, so it's a RAM lookup, which is very, very fast. We're going to be using a library called Otter, which is a high-performance Go caching library. It is type-safe with generics, so we can use it for pages, for models, for strings; we can do pretty much whatever we want as long as we have a Go data type.
Okay, so we're going to start in our controller directly here, and we're going to create a new file called cache.go in package controllers. Then we create a new struct called Cache with a generic parameter T, constrained to any. This struct has a field called cache that is an otter.Cache.
We then specify the key and value types: string for the key and T for the value, where T is just whatever type we pass in. Then we can write func NewCache, again with T any. We want to take in the size and the time-to-live (TTL),
that is, the duration something should live in the cache. This NewCache function will return a pointer to Cache[T] or an error. Then we can create the cache, or get an error back, with otter.New, passing otter.Options. Again, we specify the key and value types as string and T.
Am I doing this correctly? Let me just check right here. Give this a save. Yes. Then we say the maximum size is going to be size, and we pass an expiry calculator using otter.ExpiryWriting. Again, we pass the key and value types, and then the TTL: how long something should live in the cache.
We return nil and the error if there is one; otherwise, we simply return a Cache of generic type T, passing in the cache, and nil. Then let's add some convenience methods, or helper methods, on the receiver c *Cache[T]. I apologize if you can hear the dog barking in the background.
Right, we want a method called Get. We pass in the key and get back a T and a boolean. This is a very simple wrapper around c.cache.GetIfPresent(key). Then I want to create three more methods. First, a method for setting a value, so instead of Get it's Set.
It takes a value of type T, returns a T and a boolean, and simply calls Set with the key and value. I also want a Delete method so we can evict an entry if we need to, which simply calls Invalidate. Finally, I want an InvalidateAll that just clears all elements from the cache.
That will come in handy whenever we, let's say, make a change to an article and want to make sure the cache is cleared. Then we can use this InvalidateAll method, and on the next page load, as you'll see in a second, we just fill the cache back up with the articles. So this is our very simple cache: we have a Get, a Set, a Delete, and an InvalidateAll. Right.
Let's jump into the controllers.go file and start implementing these caches. I want to have two caches in this project: a page cache, which caches templ components, and an assets cache that we can use to cache the robots response and the sitemap response. So let's say pageCache, a pointer to Cache, and we pass the value type, which is a templ
component. And I need to spell cache correctly. Then we have the assetsCache, which simply has a value type of string. Then in here, I'm going to say pageCache, err equals NewCache with templ.Component. Let's just say a hundred pages for now, valid for 24 hours.
Now that we are returning an error, we of course also need to update our return value here, so we just return an empty controller struct if there's an error. These values are a little arbitrary. You could definitely tune them: cache for a longer time, or make the cache smaller. This is in memory, so you can't just say it can hold a thousand or a million entries like that; you need to be aware of how big the cache gets. But a hundred, I think, is a good starting point.
Now I'm just going to be lazy and do the same with the assetsCache, which takes string as its value type. And again, a hundred elements is way more than we need, but it's fine. We cache this one for 48 hours, and then we simply pass the two caches in as dependencies and return nil. The first page we're going to cache is the home page, because it makes a request to the database that we can simply
put in our cache and retrieve faster. So what we want to do is build our cache key: "pages-home", then a dash, then strconv.Itoa of the page param that we grab from the query parameters. So this is going to be our cache key.
Then we're going to say if component, exists := c.pageCache.Get(cacheKey); exists, we will simply render out that value. So if it exists, we can go to the render call here, and instead of returning views.Home with the data, we simply return the component.
If it's not there, we continue on, and then we can cache the response down here. We only want the component, so c.pageCache. Did I misspell this completely? All right, cache, like this. Then we say Set, pass in the cache
key, and then the templ component: we replace this with component and pass it in here. So now we check whether there's a cached entry; if there isn't, we fetch the data, set the component in the cache, and return it. The next time, the element will be in the cache, and we can just return the response. Next, we are just going to repeat all of this for the
article page. So we go down to the article handler. We use the slug instead, so the key is "article" and then the slug. Then we pass the slug. Right, pages, yes. We can grab this check as well, put it in here, then grab the component, set the component, and
pass the component in here. That is, very quickly, how we're going to do caching on our pages. Now, we could also do it for the load-articles endpoint; if you want to, feel free to add that to the cache too. But for now, caching only the home and the article page is going to be more than enough. The next thing I want to cache is this robots response down here, where we can
specify at the top here: say cacheKey, then "assets" and "robots". And again, we just say if asset, exists: we look this up in the assets cache with the cache key, and if it exists, we simply return the asset.
And let's just call this asset. And again, if we do not have a value in the cache that matches the key, we want to set it. So in here, we say assetsCache, and we Set the robots string. Then we return the asset.
Right, the final element is the sitemap. We want to say cacheKey equals "assets" plus "sitemap", and then we just grab it like we did above, and if we have a cached value, we return it as the response.
If we do not have an element, then down here we convert the XML bytes to a string, set that in the cache, and return the asset. There we go.
So now we don't have to rebuild the sitemap, and we don't have to rebuild the robots response either. Right, this is how we do caching, or at least how we're going to do it for this very simple use case, but it is very extensible. Otter has a bunch of options you can explore if you want to extend the cache; you can optimize it and do a lot of things. In the next episode, we will touch upon
browser caching of static assets. That is another way to cache assets we don't want to refetch all the time: for our CSS and JavaScript, we tell the browser, hey, please keep this, so the browser doesn't have to talk to the server to get those files. They can just live in the user's browser, and that will also make loading the pages a lot faster.
We just have one little thing left to do in appmain.go: we now have an error there, so we just need to check for it. And then we are ready for the next step, which is browser caching of static assets.