Local vs. production Liquibase model

In my company, we have a Spring Boot application built with Maven that we run daily in IntelliJ for development, but for staging and production we use Docker and Docker Compose. Currently, we run migrations manually through SQL and bash scripts, and we want to migrate to Liquibase.

After some investigation, I decided to use only the Liquibase Maven plugin for development, without the runtime dependency, because I don’t want migrations to be executed automatically at application startup. I created a database folder that contains the liquibase.properties file in addition to the changelogs. For development, the team will run mvn liquibase:update and other Liquibase goals as needed.
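For reference, here is a minimal sketch of what that setup could look like. The plugin version, folder layout, PostgreSQL driver, and connection details are all placeholders, not our actual values:

```xml
<!-- pom.xml: Liquibase available only as a Maven goal, no runtime dependency -->
<plugin>
    <groupId>org.liquibase</groupId>
    <artifactId>liquibase-maven-plugin</artifactId>
    <version>4.27.0</version> <!-- placeholder; pin to whatever version you standardize on -->
    <configuration>
        <propertyFile>database/liquibase.properties</propertyFile>
    </configuration>
</plugin>
```

```properties
# database/liquibase.properties -- development values (placeholders)
changeLogFile=database/changelog/db.changelog-master.xml
url=jdbc:postgresql://localhost:5432/appdb
username=dev
password=dev
driver=org.postgresql.Driver
```

With something like that in place, mvn liquibase:update applies pending changesets and mvn liquibase:status reports which ones would run.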

For staging and production, I decided to use the Liquibase Docker image, mount the changelogs folder, and pass the Liquibase properties through docker-compose.
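As a sketch, that compose service could look something like the following; the db service name, mount paths, image tag, and credentials are assumptions, and mounting a defaults file into the container would work just as well as command-line flags:

```yaml
# docker-compose.yml (sketch): one-shot migration service alongside the app database
services:
  liquibase:
    image: liquibase/liquibase:4.27   # placeholder tag
    depends_on:
      - db
    volumes:
      - ./database/changelog:/liquibase/changelog   # mount the changelogs
    command: >
      --changelog-file=changelog/db.changelog-master.xml
      --url=jdbc:postgresql://db:5432/appdb
      --username=${DB_USER}
      --password=${DB_PASSWORD}
      update
```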

Is this implementation correct, using the Maven plugin for development and docker-compose with mounts for staging and production? Or is there a better way to achieve this?

Hi @a.saeed,

Thanks for the question. I always like to make my early-stage executions as similar to the production deployment as possible. I find that doing so allows my teams to find problems in the deployment process sooner rather than later. I think of this as an example of “shift-left” thinking.

Practically speaking, though, there have been plenty of situations where my developers do one thing and the continuous integration and delivery pipeline does something different. Even then, I want to execute that “production-like deployment” as early as possible in the pipeline, because I want to catch problems as soon as they are introduced.

So, specifically in this case, I would run a test of the “Liquibase docker image and mount the changelogs folder…” approach as one of the first stages in my CI pipeline, ideally after every commit, just to make sure that nothing broke.
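For illustration only, here is a sketch of such a stage, assuming GitHub Actions and a docker-compose.yml that defines db and liquibase services like the one sketched above (both names are assumptions, and a real pipeline would also wait for the database to become healthy before running the migration):

```yaml
# .github/workflows/migration-check.yml (sketch)
name: migration-check
on: [push]
jobs:
  migrate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Apply migrations production-style
        run: |
          docker compose up -d db             # throwaway database for the check
          docker compose run --rm liquibase   # same image and mounts as staging/prod
          docker compose down -v              # tear everything down
```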

Does that make sense?

- PJ