Here at my work, our system can run on three different databases. That is, we can use the system with Oracle, Postgres, or SQL Server.
Until now we haven’t used Liquibase. We’re implementing it now, and during this process a question arose about the best way to work with Liquibase in a scenario like this.
---
Our database today is well structured, and during some tests I encountered some problems, for example:
When executing `generateChangeLog` against an Oracle database, exporting tables, columns, views, and primary keys, I was able to generate a changelog.xml file successfully.
When trying to import this same changelog.xml into a Postgres database, I encountered problems with some views, where the SQL syntax ends up being different. (ok, this was expected).
But what would be the best way to solve this?
Generate a changelog.oracle.xml, a changelog.postgres.xml, a changelog.mssql.xml and handle this with --context?
Or generate a single changelog.xml file for all three databases and handle the changesets with the dbms=$DATABASE tag?
The same problem occurs when exporting some sequences: when exported from Oracle, they have attributes that don’t work on the other databases.
Welcome to the forum @bodanesemateus ! The way you are using Liquibase is the original use case that drove its development: one application/service that can support multiple backend databases. There are a few best practices and product conventions to be aware of when using LB this way:
Manage a single changelog (or a single set of changelogs) - by colocating all changes, you tend to keep the changes in better alignment across your databases. With one shared changelog you’re less likely to forget a change for a given platform, and it’s easier to confirm that change definitions accomplish the same goal regardless of platform.
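For example, a root changelog that includes a shared set of change files might look like the sketch below (the file names are just placeholders for illustration):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog
    xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
        http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-4.4.xsd">

    <!-- One ordered set of change files shared by all three platforms -->
    <include file="changes/001-tables.xml" relativeToChangelogFile="true"/>
    <include file="changes/002-views.xml" relativeToChangelogFile="true"/>
    <include file="changes/003-sequences.xml" relativeToChangelogFile="true"/>

</databaseChangeLog>
```

The same root file is then used for every `update`, whether the target URL points at Oracle, Postgres, or SQL Server.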
Work with as few changesets as possible - Not every changeset will require 3 versions, one for each of your supported databases. Only add changesets for specific platforms when you absolutely need to. This simplifies changeset/changelog management by preventing bloat in the changelog. As you mentioned, use dbms attributes to control when duplicated changesets should be executed.
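As a sketch of the `dbms` approach, a view whose SQL differs by platform can be expressed as two changesets side by side in the same changelog (the view name, SQL, and changeset ids here are hypothetical):

```xml
<!-- Oracle-specific version of the view -->
<changeSet id="create-customer-view-oracle" author="example" dbms="oracle">
    <createView viewName="v_active_customer" replaceIfExists="true">
        SELECT id, name FROM customer WHERE active = 1
    </createView>
</changeSet>

<!-- Version for Postgres and SQL Server; skipped when updating Oracle -->
<changeSet id="create-customer-view-other" author="example" dbms="postgresql, mssql">
    <createView viewName="v_active_customer" replaceIfExists="true">
        SELECT id, name FROM customer WHERE active = TRUE
    </createView>
</changeSet>
```

Liquibase evaluates the `dbms` attribute at update time, so only the changeset matching the target database is executed; the other is marked as skipped.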
Take advantage of generic type definitions in Liquibase - Liquibase provides some generically defined column data types that will then be converted to the appropriate type for the database being updated. For more information on this consult: Liquibase Data Type Handling
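To illustrate, a single changeset using generic type names can serve all three platforms; Liquibase translates each type to the platform-specific equivalent (e.g. `datetime` becomes `TIMESTAMP` on Oracle/Postgres and `datetime` on SQL Server). Table and column names here are just an example:

```xml
<changeSet id="create-order-table" author="example">
    <createTable tableName="customer_order">
        <!-- Generic types are converted per target database at update time -->
        <column name="id" type="int" autoIncrement="true">
            <constraints primaryKey="true" nullable="false"/>
        </column>
        <column name="description" type="varchar(255)"/>
        <column name="created_at" type="datetime"/>
        <column name="is_paid" type="boolean"/>
    </createTable>
</changeSet>
```

Because no platform-specific types appear, this changeset needs no `dbms` attribute and no duplication.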
Working in a single changelog and using dbms to ensure the right changesets are executed for the database you’re updating is the best approach, especially when you minimize the number of duplicated changesets needed to accommodate the different databases. Good luck!