Deployment Preparation and Optimization Goals
The application is nearly ready for deployment, with remaining tasks focused on SEO optimization, page caching, browser caching, and error handling. The current session concentrates on foundational SEO improvements before going live.
Meta Tags and SEO Fundamentals
Meta tags in the HTML head provide structured information to browsers, search engines, and social platforms. Essential tags include description and canonical URL, which improve search visibility and prevent duplicate content issues. Each page should have unique, concise meta descriptions optimized for user engagement.
Open Graph and Twitter Metadata
Open Graph (OG) and Twitter Card tags enhance link previews on social platforms. These include title, description, URL, type, and optional images, ensuring shared content appears with structured previews and improved discoverability.
Dynamic Metadata Integration
A metadata structure is introduced to dynamically inject SEO-related values per page. The base layout is updated to accept metadata fields, enabling consistent and scalable SEO configuration across homepage, article pages, and other routes.
Robots.txt Configuration
A robots.txt endpoint is implemented to define crawler access rules. The configuration allows all user agents and specifies the location of the sitemap, guiding search engine bots on how to index the site.
Sitemap Generation
A dynamic sitemap.xml endpoint is created to list important URLs, including the homepage and published articles. Each entry includes location, change frequency, last modification date, and priority to help search engines efficiently crawl and prioritize content.
SEO Foundations and Next Steps
The implemented features establish foundational SEO support for discoverability and indexing. The next focus area will be page caching to reduce database load and improve performance.
We are dangerously close to being ready to ship this blog to the interweb and actually go live with all the work we have been doing so far. I want to do three more things, technically three more episodes, before we start to deploy this application. I want to touch upon SEO optimization, page caching, and browser caching of our static assets.
Finally I want to touch upon error pages and some logging strategy. So let's jump right into SEO optimization. Here in base_layout.templ we have this head element where we define some meta tags, and we haven't really touched on what those are. Meta tags are simply HTML elements that live in this head section, and they provide information about the web page to the browser, but also to search engines and social media platforms, which is very important if we want some discoverability on the internet. They don't appear on the page itself; it's something you view behind the scenes, just a way to provide some meta information about
the application or, in this case, the blog that we are building. So what are the essential meta tags? We want a description meta tag that summarizes the content on the page, so that when a search engine hopefully crawls our page, it has an idea of what is on it. This is really good for search engines like Google, so they know what to showcase to users. Then, whenever a user searches for something that matches the content on our page, we show up in the search results with a small summary. We also want what is known as a
canonical URL — I hope I pronounced this correctly — that tells the search engine which version of a page is the main one, so we don't have duplicate content issues. You should think of these as human first and search engine second. The description should entice people to click; we are technically providing it for the search engine, but it will
be shown to real people in the end. Each page should have unique meta tags, so they accurately describe what is on the page. It can be considered part of your marketing strategy if you want your blog to be successful. It's really important to test how you look in actual search results. So once we are live, you can go in after some time, once Google or the other engines have crawled your page, and see
what shows up in the search results and how it looks to users. It should entice people to click on your links. Beyond that, we want some Open Graph tags, which are often used by Facebook and LinkedIn: og:title, og:description, and there's also an option to provide a preview image and a URL. You can specify the type of the page as well. Then there are the Twitter cards: twitter:card, twitter:site, twitter:creator, twitter:title, twitter:description, and an image once more. These control what shows up when we share a link to one of our articles: it will appear with a nice image and a short title and description of what is on that page.
So we're going to start with the basics. We're going to add Open Graph tags, and we're going to add some Twitter-specific tags. I'm not going to be adding images, because we haven't touched upon that in the course. But feel free to look it up: use the assets package we have to host images in the beginning, and then you can always move up to something more complex, like S3-compatible buckets.
It's actually fairly easy to add images, because we can just ship them as part of our binary. One thing we don't want is to use the same meta description on every page — that's a big mistake. We also want to limit how long it is; that's why we have the max length on the excerpt, and we're going to be reusing the excerpt. We should technically have a shorter version, but this course is already getting quite long, so just be aware that you should aim for around 140-160 characters for the description. We of course allow 255, because that is what we use to summarize articles on our main page. We should not forget to update these meta tags, but with our setup right now that would not really be an issue, because they're going to be dynamic, populated from what we have in the database and injected into the head tag. So we are of course going to be adding this to our base layout component here, with some functionality so we can pass it in per page and have it set up correctly for the entire blog whenever we add a new page. So let's fix the meta tags, and then we need to touch upon what is known as a robots.txt file and a sitemap. Let's begin by defining a new struct called MetaData. In here, let's have TwitterCreator string. Let's also have OGType string.
We want OGTitle string, and OGDescription — let's just do OGDesc for short. We want OGURL string. We also want OGSiteName string. And OGLocale — actually, we don't want to set that on every page; we're only dealing with the en_US locale, so we can hard-code it. Then we want TwitterTitle string. We technically could have a TwitterDescription too, but let's keep it simple: we have the titles and the OG description, so let's just rename it to a shared Desc.
Right — and we also don't want this one. This looks just about right. So in the layout we are going to be receiving a meta MetaData parameter. Right, and then we need to add in all of these tags. So we can say meta name — let's start with twitter:creator.
And the content is going to be your Twitter handle — mine is mbvisti; change this for your own. Then we need twitter:title. What more do we have — twitter:title, twitter:description? Twitter title, yes. So we say meta again, and for the title here let's just use the title that we pass in. And what do we want here? I have twitter:creator — let's pass in the Twitter creator field instead of hard-coding it. I need to fix this. There we go.
All right. Then let's add the og:type — do we have OGType? That's nice. Then og:title, which is of course gonna be the OG title. Close — there we go. Then og:description, where we're just gonna be using the description like this. And we keep getting these import errors — it's just my LSP messing up a little bit. After og:description we need og:url. Do we have the URL? We do, and the URL will be the canonical URL plus the
slug. So I made a mistake up here: we need to have Canonical and then Slug string fields, so that the URL becomes meta.Canonical plus meta.Slug. Again, this LSP is messing me up. Right, we have the URL. Then we have og:site_name, which is just gonna be hard-coded for now, because that will always be bleeding edge. The locale is gonna be en_US. Then we have the meta name description, with the content being meta.Desc.
Yes, close it. Give that a save, and then we also want the link element with the rel attribute set to canonical. The href we want to be basically what we have up here: the canonical plus the slug. Close this, and that is all for the head. Now we need to go in and add this to all our pages. We of course have the admin pages behind our login wall, where it doesn't really matter — the search engine cannot crawl those pages, because it would need to log in and it cannot. So let's just add
an empty version. That should be it here. Come on. We have the login page as well, where we do provide the data — let me just fill in all of the fields. The Twitter creator is gonna be mbvisti, and the type is gonna be website. The title is gonna be login. The canonical we have from the config — I can't import that yet; we're gonna deal with it in a second. Then the slug, and the OG site name.
Actually, let's go in here and change the OGSiteName to just be hard-coded, because that's always gonna be bleeding edge, so we don't need to pass it in. So the canonical will come from config. Can we import the config package? Config, right.
Then we say config. Let me see what we exported here. We have not exported it, so we actually need another variable here, and we want it to be called Domain. And if it's not set, we're just gonna fall back to localhost with our development port. So we can say config.Domain here. Right. The slug we can just pass as routes.Login, and the description can be "the login page to access the admin of bleeding edge". We don't really need to
provide metadata for our login page — it's not really important that the search engine knows about it — but this is a quick example. Then we have home, where we of course also need to add metadata. We can grab all of the fields from the login page here.
And this all stays the same. The title is just going to be home — or let's call it "bleeding edge home page". The domain stays the same. The route will be the home page route, and we also need to import the config package, like this. The description will be "the home page of bleeding edge, a blog about technology". There we go.
Then we grab all of this and go to our article page, where we also need to pass this data. Most of it stays the same. For the OG title we are going to be using article.Title. The canonical is still the same, and then we pass in the slug here, and article.Excerpt for the description. All right, and we also need to import the config package, of course. There we go. Now, this is basically SEO optimization from a meta tags standpoint. It is very simple the way we've done it here, but it is enough to get you going; you can optimize these and extend them with the topics we talked about. Next up, we need to tell the robots that crawl our page what they can crawl, and we also want to provide them with an easy way to identify pages, which is often done through what is known as a sitemap. So let's deal with that next. We need to go into our assets controller and add these methods to our project. All right, we already have some asset-related controller methods we can borrow from — the styles and JavaScript ones here. So let's start by creating the robots method — let's just call it robots.
And in robots, we need to create a RobotsTxt struct that will contain a user-agent field, an allow field, and a sitemap field. All of these need to have YAML tags so we can marshal it.
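The struct-with-YAML-tags approach works because yaml.Marshal happens to emit the key: value lines the robots.txt format expects. Since the YAML dependency isn't essential to the idea, here is a dependency-free sketch producing an equivalent body with fmt.Sprintf — the handler wiring from the course is omitted, and the domain is a plain parameter:

```go
package main

import "fmt"

// robotsTxt builds the robots.txt body: allow every user agent on every
// page and point crawlers at the sitemap. The course marshals a struct
// with YAML tags instead; the output is equivalent.
func robotsTxt(domain string) string {
	return fmt.Sprintf("User-agent: *\nAllow: /\nSitemap: %s/sitemap.xml\n", domain)
}

func main() {
	fmt.Print(robotsTxt("https://example.com"))
}
```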
Right: allow and sitemap. Great. Then let's say robotsTxt, err := yaml.Marshal. Let me jump out, restart this, and jump into controllers, and then go down to robots. Can we not import yaml? No. All right, let me grab the package that we need — we need to add this package right here. Let's go down to our robots method and use it so it doesn't disappear from the imports: robotsTxt, err := yaml.Marshal. Give that a save. And now we jump out and say go mod tidy. That should hopefully fix the errors — yes. Then we just return an error. Right, so what should go into this yaml.Marshal? Well, we need to add the robots struct here and fill it out. For the user agent, we're gonna allow all user agents,
on all pages, and we're gonna say that the sitemap field will be our config.Domain plus routes.Robots — which we haven't created yet; we'll create that in just a second. So basically, a user agent is a way to tell our website who you are. When the request comes from an actual user, it will be filled out with a lot of details about your OS, your browser type, and those kinds of things. A crawler will have a very specific user agent. We could target a specific one and say, like, only the Google bot can access these pages — but we're just going to allow everything, so we can hopefully get some people on our
page. Then we remove this, and we simply return the robots text as a string, and nil. Why is this not working correctly? Let's do robots and... no. Of course not — we need to use c.String, with http.StatusOK and then the robots bytes converted to a string. There we go. Let's jump into our routes here and specify our robots route. That will just go to
slash, what is it, robots.txt? Let me just quickly double check that. Yes, so it goes to robots.txt and now, do we have any errors here? No. So now whenever we run our application, we can see
the result. We go here to our blog, give it a refresh, then /robots.txt — it's not found, and it's not found because I haven't registered it yet. So let's just register the robots route here, and then also here. Now for a refresh — yeah. So this is what the bot will see. And you can see the sitemap line points to localhost/robots.txt, which is of course not correct — I am fumbling things here. It's not supposed to go to robots, it's supposed to go to sitemap. So we can go to, no, routes, and add our sitemap route. It should go to sitemap.xml. So: sitemap.xml.
Great, and now if we refresh — yes. So now we can tell the agent where it can access the sitemap. Right, so to create the sitemap, we are gonna be creating another controller method called sitemap. And we need a struct — let's just call it URL. Get rid of this, and then we have another one here called Sitemap, where the Sitemap will have — I'm just gonna copy-paste here — a slice of URL. So we have our XML name, the XMLNS attribute, and then the URLs.
Don't worry too much about what this means — it's not really that important. What's important here are these values: we have a name; a location, where you can find the page; the change frequency, so we can tell the robot how often this content changes and how often it should scrape it; when the last change was; and what the priority of this page is. Obviously, the higher the priority you give it, the more priority the agent will give it as well. So what we can do is say urls is equal to a slice of URL, and then urls = append(urls, URL{...}). There we go. Let me just fill it out here. The XML name we leave out. For the location, we say config.Domain — this first entry is the home page. The change frequency, let's say monthly. The last mod is time.Now — that's not gonna work as-is, so let's format it with the layout string 2006-01-02. This should really be a more accurate value, because our home page doesn't change that much, but for now this is fine, and we're going to give it a priority of one. Then we only have our article pages left that we need to add. So here, first we're going to grab
the articles: articles, err := models.FindPublishedArticles, with the context and c.dbConn. Right, and then we say for article := range articles, and we grab — let's grab this one here — give that a save. And let's say the location is still the domain, but we need to add the article slug: article.Slug. Monthly is fine. The last mod is article.UpdatedAt, formatted — and we just pass in the 2006-01-02 layout. No, we want this to be a string.
We give this a priority of one as well. Right, then we can construct the sitemap: we say xmlBytes, err := xml.MarshalIndent, and we pass in a Sitemap that we're gonna fill out, an empty string for the prefix, and a space for the indent. The XML name we leave out, and then — I need to grab this thing out right here, yes — we simply return the XML header plus string(xmlBytes). All right, no. No, of course we need a blob — so c.Blob, where we pass in what we were just typing out. So again, we say http.StatusOK, and — let me check my notes here — the content type application/xml, and then the bytes. The compiler reminds us to check the error. That's nice.
So now we can go into the router, grab this, and say the sitemap route should go to the sitemap method. Give this a save, go back into the browser, refresh, and let's try to go to this location right here. Not found — okay. Slash sitemap — that should be correct. Why is it not correct? Let me try and kill the development server and just run it again. There we go — and you can see we have all of our articles here. We do have an error, though: we need a slash.
So inside the controller, in the sitemap method here, we just say plus slash plus the article slug. Now we have the correct URL with a slash — no, we do not. I am forgetting that it needs to go to config.Domain plus /articles/ plus the slug. And now I'm doing it right. Okay, let's try and grab one of these links — and we get the correct article. All right, this was probably a lot of information, but this is the basics of what we need for some SEO optimization.
I use SEO optimization a little loosely here — this is probably more like SEO foundations — but it's a good start to get some visibility with the search engines and have people be able to discover your blog. The next thing we're going to deal with is page caching. For example, we don't necessarily want to hit our database every time we pull out the same article if it has not been updated. It's the same with our sitemap: we can just put it in a cache, so we don't have to pull out all of our articles every time the sitemap gets visited by an agent, because speed also matters here. So next episode we're going to be dealing with page caching, so things can be a little bit more snappy.