generateChangeLog for multiple schemas in MySQL


I have an app, not too old, that runs on MySQL with 5 schemas. I was looking to leverage Liquibase going forward to manage schema changes, and the guides indicated I should first generate the changelog from a clean state to make sure all metadata is present. Now, when my app connects to the db, the JDBC URL is simply jdbc:mysql://localhost:3306/, i.e. no default database selected.

I’ve also at this point hacked up the Liquibase code a bit to support schemas as a parameter to generateChangeLog. The parameter looks like it is processed; there was just no easy way to pass the value in. I can see from the log output that it is picking up all of the schemas specified. However, I’ve noticed that the schema that gets generated seems to depend on the JDBC URL instead: since my URL format selects no database, I get no diff output at all, and when I do specify a database in the URL, I only get that one schema in the output. The problem is that my FKs are cross-schema, so I need to generate some of the schemas together.

Is this at all possible to do? Is there a section of code I could look at that might be using the connection DB instead of the specified values?

Of course if I get this stuff working, I’ll see about contributing it back.


The way that Liquibase is designed, it only works with a single schema at a time. I believe that support for multiple schemas would be a difficult change.

Steve Donie
Principal Software Engineer
Datical, Inc.

No comments? Is the JDBC URL supposed to be all that’s exported?

I have a couple years experience using Liquibase with multiple schemas in MySQL and we’ve tried a few things.

First, I haven’t really used generateChangeLog. Instead, for our initial deployment, I reverse-engineered one or more of our existing schemas into MySQL Workbench, then forward-engineered a DDL-creation script from Workbench and pasted it into a single Liquibase changeset (with an appropriate custom delimiter). Or split it up into multiple changesets if you have the time/motivation and as you see fit. But then (trust me) don’t ever edit those changesets. Always make all changes as incremental Liquibase changesets from that point forward, unless you want to enter a world of pain.
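
To illustrate, here is a minimal sketch of such a baseline changeset in the XML changelog format, with the Workbench-generated DDL inside a sql tag using a custom delimiter (the ids, author, and table names are hypothetical placeholders, not from the original post):

```xml
<databaseChangeLog
    xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
        http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.1.xsd">

    <!-- Hypothetical baseline changeset holding the Workbench-generated DDL. -->
    <changeSet id="baseline-1" author="someone">
        <!-- endDelimiter lets the script use ;; between statements, so that
             semicolons inside triggers/procedures are not split on. -->
        <sql endDelimiter=";;" splitStatements="true">
            CREATE TABLE foo.tablea (id INT PRIMARY KEY);;
            CREATE TABLE bar.tableb (id INT PRIMARY KEY);;
        </sql>
    </changeSet>
</databaseChangeLog>
```

Once this baseline has run anywhere, treat it as frozen, per the warning above.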

We basically found two approaches to dealing with multiple schemas:

  1. Deploy multiple schema changes via a single Liquibase deployment operation. (By “Liquibase deployment operation” I mean an execution of Liquibase via the command line or Maven.) Set the JDBC URL to default to the schema where you want the DATABASECHANGELOG to live, and within your changesets explicitly specify the schema names you want to be working with, e.g. CREATE TABLE foo.tablea; CREATE TABLE bar.tableb; etc. (If you don’t specify a schema name explicitly, the operations will occur in the default schema specified by your JDBC URL and/or default MySQL settings.)
  2. Deploy multiple schema changes via multiple Liquibase deployment operations. Set the JDBC URL of each invocation to the schema you are deploying, and you will end up with a DATABASECHANGELOG table in each schema recording the changes related to that schema. Don’t specify schema names in any of your changesets; control that via the JDBC URL only. You will likely end up with one master.xml file per schema (masterDeploySchemaA.xml, masterDeploySchemaB.xml, masterDeploySchemaC.xml), and you invoke the appropriate one together with the matching JDBC URL.
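
A sketch of what a changeset under approach #1 might look like, assuming the XML changelog format (schema, table, and constraint names are hypothetical): the JDBC URL points at the schema that holds DATABASECHANGELOG, while each statement qualifies its schema explicitly.

```xml
<!-- Run with e.g. --url=jdbc:mysql://localhost:3306/changelog_home -->
<changeSet id="2" author="someone">
    <sql>
        CREATE TABLE foo.tablea (id INT PRIMARY KEY);
        CREATE TABLE bar.tableb (
            id INT PRIMARY KEY,
            a_id INT,
            <!-- Cross-schema FK works here because both statements run
                 in the same deployment, with schemas fully qualified. -->
            CONSTRAINT fk_b_a FOREIGN KEY (a_id) REFERENCES foo.tablea (id)
        );
    </sql>
</changeSet>
```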

Despite starting with approach #1, we found over time that #2 is a bit clearer and less error-prone. The problem with approach #1 is that when a DBA or developer backs up and restores a schema, they are not always careful to also restore the corresponding DATABASECHANGELOG entries located in the default/central schema, and that leads to real problems! We had a nasty production issue crop up once because of exactly that.

Much better to have the DATABASECHANGELOG for all changes related to a schema collocated with the schema itself; this in turn requires that you invoke Liquibase multiple times, once per schema, with a different JDBC URL for each invocation.
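
On the command line, approach #2 then looks roughly like this (flags as in the Liquibase 3.x CLI; the file names, credentials, and schema names are hypothetical):

```shell
# One invocation per schema; each URL defaults to the schema being deployed,
# so DATABASECHANGELOG lands in that schema.
liquibase --url=jdbc:mysql://localhost:3306/schemaA \
          --username=deploy --password=secret \
          --changeLogFile=masterDeploySchemaA.xml update

liquibase --url=jdbc:mysql://localhost:3306/schemaB \
          --username=deploy --password=secret \
          --changeLogFile=masterDeploySchemaB.xml update
```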

Regarding your concern about foreign keys across schemas: this shouldn’t be a problem. As long as you control the order in which Liquibase is invoked for schema A vs. schema B, you can ensure that cross-schema FKs are set up after both source tables exist (i.e. if you deploy schema A first, add the foreign key in B pointing to A as a later changeset associated with schema B).
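
For the cross-schema FK itself, Liquibase’s built-in addForeignKeyConstraint change supports distinct base and referenced schemas (all names below are hypothetical):

```xml
<!-- Lives in the schema-B changelog; run after schema A is deployed. -->
<changeSet id="add-fk-b-to-a" author="someone">
    <addForeignKeyConstraint
        constraintName="fk_tableb_tablea"
        baseTableSchemaName="schemaB" baseTableName="tableb"
        baseColumnNames="a_id"
        referencedTableSchemaName="schemaA" referencedTableName="tablea"
        referencedColumnNames="id"/>
</changeSet>
```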

If you have a complex nest of foreign keys that prevents a straight sequential deployment of schema A, then B (with foreign keys to A), then C (with foreign keys to A or B), you may want additional deployment operations and changesets that create the foreign keys after all of the tables are set up (deploySchemaA.xml, deploySchemaB.xml, deploySchemaC.xml, deploySchemaA_FK.xml, deploySchemaB_FK.xml, deploySchemaC_FK.xml).

Or you may want to keep one deployment per schema (deploySchemaA.xml, deploySchemaB.xml, deploySchemaC.xml) with FK changesets that employ runAlways and preconditions, so you only attempt to create an FK if both source and target tables/columns are present (and the FK itself is not). In that case, if you run your deployments twice, any cross-schema FKs not created on the first pass will get created on the second.
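
A sketch of that runAlways/precondition pattern (schema, table, and constraint names are hypothetical): the changeset is considered on every run, but onFail="CONTINUE" skips it, without failing the deployment, until the tables exist, and the not/foreignKeyConstraintExists check keeps it from firing twice.

```xml
<changeSet id="fk-b-to-a" author="someone" runAlways="true">
    <preConditions onFail="CONTINUE">
        <and>
            <!-- Only create the FK once both sides exist... -->
            <tableExists schemaName="schemaA" tableName="tablea"/>
            <tableExists schemaName="schemaB" tableName="tableb"/>
            <!-- ...and only if it has not been created already. -->
            <not>
                <foreignKeyConstraintExists
                    schemaName="schemaB"
                    foreignKeyName="fk_tableb_tablea"/>
            </not>
        </and>
    </preConditions>
    <addForeignKeyConstraint
        constraintName="fk_tableb_tablea"
        baseTableSchemaName="schemaB" baseTableName="tableb"
        baseColumnNames="a_id"
        referencedTableSchemaName="schemaA" referencedTableName="tablea"
        referencedColumnNames="id"/>
</changeSet>
```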