Database Migrations Strategy
The application must apply database migrations before running in production. Migrations can be executed via a deployment pipeline or automatically on application startup. They must be backward-compatible and applied incrementally to avoid breaking a running system.
Implementing Migrations with Goose
A runMigrations function is created using the Goose library with PostgreSQL. Migration files are embedded using Go's embed package, and migrations are executed programmatically during application startup with proper error handling and logging.
Timestamp vs Sequential Migration IDs
During development, timestamp-based migration IDs prevent conflicts between multiple developers. Before production, migrations are converted to sequential numbering to ensure deterministic ordering, simplify auditing, and eliminate merge conflicts.
Rebuilding and Resetting the Database
After converting migration IDs, the database must be dropped and recreated to reapply migrations cleanly. This ensures consistency with the new sequential versioning scheme.
Building a Production Binary
A build command compiles a static Linux binary using CGO_ENABLED=0 and cross-compilation. Linker flags strip debugging information to reduce binary size, and -mod=readonly ensures reproducible builds.
Initial Admin User Creation
On first deployment, an admin user is manually inserted into the production database. This temporary step simplifies initial access and is removed after deployment.
Deploying the Application Binary
The compiled binary is transferred to the server using scp with an SSH configuration. This prepares the application for execution on the production server.
Right, we are super close to being able to run our application live on our server, but we need to set up a systemd service to run the application for us, and we also need our migrations to be applied to the database on the server. So let's start by dealing with the migrations, and then I'll explain what a systemd service is and how to configure it.
There are multiple ways you can apply migrations. You can make it part of your deployment pipeline, where a GitHub Action or something similar logs into the server or the database, runs the migrations, and then continues on. Or, for something like this, we can simply have them run on startup: the application just checks whether there is anything that needs to be applied, and if not, it continues on with the process. You have to think of migrations as changing the wheels of a running vehicle.
That means that, as a default (sometimes you may have no choice), you never make breaking changes that can't be applied to a running application. So if you have to break something, you do it step by step: you make changes gradually so that the car keeps running without breaking. To do this, we can go into cmd/app/main.go and create a new function called runMigrations that accepts a context.Context and our database pool and returns an error. We're going to be using the Goose library to actually apply the migrations, and since we're passing in a pool connection, we grab a *sql.DB via OpenDBFromPool. Then we create a provider with goose.NewProvider, passing the Postgres dialect, the DB, and the migration files.
The migration files still need to be embedded; we'll do that in just a second. Finally, we set WithVerbose to true and make sure the DB gets closed. Right, let's quickly add the embed: in database/migrations we create a file called migrations.go in package migrations, where we simply add a go:embed directive matching everything with a .sql extension and declare var Files embed.FS, just like we do with assets. Great. Back in runMigrations, we check the error from creating the provider, import the migrations package we just created, and then call the provider's Up method with the context, discarding the result with an underscore: if the error is not nil, we return it; otherwise we log slog.InfoContext(ctx, "migrations applied") and finally return nil. Then we need to call this, so in run we say: if the error from runMigrations with the context is not nil, we return the error, and we also pass in the DB pool we created there. And let's add some logging here as well, since we're about to go into production: "could not create new config", with the error, and "could not run application", with the error.
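Putting the walkthrough together, here is a sketch of the two pieces. It assumes the pressly/goose/v3 provider API and a jackc/pgx/v5 connection pool; the module import path "app/database/migrations" is illustrative, not the project's actual path.

```go
// database/migrations/migrations.go
package migrations

import "embed"

// Files embeds every .sql migration that sits next to this file.
//
//go:embed *.sql
var Files embed.FS
```

```go
package main

import (
	"context"
	"fmt"
	"log/slog"

	"github.com/jackc/pgx/v5/pgxpool"
	"github.com/jackc/pgx/v5/stdlib"
	"github.com/pressly/goose/v3"

	// Assumed module path; adjust to the project's own.
	"app/database/migrations"
)

func runMigrations(ctx context.Context, pool *pgxpool.Pool) error {
	// Goose works against database/sql, so adapt the pgx pool to a *sql.DB.
	db := stdlib.OpenDBFromPool(pool)
	defer db.Close()

	provider, err := goose.NewProvider(
		goose.DialectPostgres,
		db,
		migrations.Files, // the embedded *.sql files
		goose.WithVerbose(true),
	)
	if err != nil {
		return fmt.Errorf("could not create goose provider: %w", err)
	}

	// Up applies only the migrations that have not been applied yet,
	// so running this on every startup is safe.
	if _, err := provider.Up(ctx); err != nil {
		return fmt.Errorf("could not run migrations: %w", err)
	}

	slog.InfoContext(ctx, "migrations applied")
	return nil
}
```

In run, before the server starts, the call site is then just an error-checked `runMigrations(ctx, pool)`.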
Right, we also need to talk about our migrations. Currently, all of our migrations have timestamps, and that is typically the default, because multiple developers working on branches simultaneously are all creating migrations. If everyone used sequential numbers, we might have two developers who each create their next migration as, say, 00005, and those two would clash and cause merge conflicts. That's why the default is to use timestamps. However, once something is ready to merge and go into production, we want to use sequential numbers, because that makes it more obvious what has been run. So instead of the timestamp we have 1, 2, 3, 4, 5, which makes auditing simpler and gives us deterministic ordering. Plus, there's no more risk of conflicts, since the code has already been merged, right? So when we work on the code, we work with timestamps, and before merging we convert them to sequential ordering. We could technically go straight to sequential numbers here, since we're the only developer, but I want to show it the right way, so to speak. So we simply go in and add another just command, fix-migrations, and run just fix-migrations. You can see we now have 1, 2, 3, 4, 5. Then, because we have already applied the migrations with the timestamped IDs, we cannot run the renumbered ones against the same database, so we have to drop the database and recreate it, then run our migrations again, and we can see "successfully migrated database to version 4". Then we can say just seed, and we are back. Next, we also need to build our application binary, and to do that we can create another
just command, so let's say it's called build-app, where I'm not going to copy-paste, I'm just going to write it out for you: CGO_ENABLED=0, GOOS=linux, then go build with -ldflags "-s -w", -mod=readonly, and -o bin/app, building from the entry point cmd/app/main.go. CGO_ENABLED=0 simply disables cgo, which produces a pure Go binary with no C dependencies, so we essentially have a static binary that can run anywhere. We're cross-compiling for Linux, which is the OS we're going to be deploying on, and go build is the core command that triggers this whole thing.
Then we add some linker flags that strip the symbol table from the binary and also remove something called DWARF debugging info, which results in a significantly smaller binary. Then we say -mod=readonly, which prevents go build from modifying go.mod or go.sum, so it's really useful for reproducible builds. We then ask it to output the result to bin/app, so the binary is called app. Finally, we specify the entry point. Now, when we actually deploy this on the server, we also need to run the migrations, which we've already handled, but we also want the admin user to be created, because we need an admin user before we can access the admin area. The way I typically do this is I simply go in and grab
the user-creation code we already have, and include it in the first deployment. Of course, we need to be careful not to keep recreating the user on every start, but this is much easier: do one deployment that runs the migrations and adds the user, then delete the code, and we don't need to worry about it again. Because we have salting and peppering going on with the passwords, we can't just insert a record directly into the production database table; we could create a user locally and then copy that row into the database, which is also an option, but I think this way is significantly easier. Right. Let's say just build-app, check the bin directory, and you have the binary right there.
Then we can use something called scp to copy the file from our local machine up onto the server. We specify the file, which is the binary at bin/app, use our SSH config by specifying the bleeding-edge host, and then the path we want to copy it to, which is going to be /home/admin. This is a little bit easier than spelling out the full path to the key and all those kinds of things; we just use the bleeding-edge alias, like this.
That should have copied it onto the server. Let's just verify. And there it is.
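The copy-and-verify step sketched as commands (the "bleeding-edge" alias spelling and the /home/admin target path are assumed from the walkthrough, matching an alias defined in ~/.ssh/config):

```shell
# Copy the compiled binary up to the server via the SSH config alias.
scp bin/app bleeding-edge:/home/admin/

# Verify it arrived.
ssh bleeding-edge 'ls -l /home/admin/app'
```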