Cross DBMS usage of createIndex


I am new to liquibase and ran into a problem when trying to set it up.
I am currently testing with liquibase 3.0.4-SNAPSHOT because of an NPE bug.

My problem is related to the fact that some DBMSs automatically create indices when adding constraints. My use case is that I add a unique constraint and after that a unique index. I am not sure whether there are databases out there that do not automatically create a unique index when a unique constraint is created, but for now I assume there are.

The problem is that, especially in my testing environment where I use H2, the createIndex fails because H2 detects that a unique index for the same columns is already present (although the indices have different names).
When I switch to a DBMS that does not automatically create the unique index, the current changeSet should work.
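To illustrate, the failing situation looks roughly like this (the table and names here are made up for the example):

```xml
<changeSet id="example-1" author="example">
    <addUniqueConstraint tableName="person" columnNames="email"
                         constraintName="uc_person_email"/>
    <!-- On H2 this fails: the constraint above already created an
         equivalent unique index, just under a different name -->
    <createIndex tableName="person" indexName="idx_person_email" unique="true">
        <column name="email"/>
    </createIndex>
</changeSet>
```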

Shouldn’t the createIndex change be skipped automatically if the index already exists? I could of course wrap every createIndex element in its own changeSet with a preCondition that checks whether the index exists, but that would be very verbose and also somewhat DBMS-dependent. One of the reasons for using liquibase is DBMS independence, so I think this should be done automatically by liquibase.
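The verbose workaround I mean would look something like this for every single index (again with made-up names, assuming the indexExists precondition available since liquibase 3.0):

```xml
<changeSet id="example-2" author="example">
    <preConditions onFail="MARK_RAN">
        <!-- skip and mark as ran if any index on these columns exists -->
        <not>
            <indexExists tableName="person" columnNames="email"/>
        </not>
    </preConditions>
    <createIndex tableName="person" indexName="idx_person_email" unique="true">
        <column name="email"/>
    </createIndex>
</changeSet>
```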

Are there any plans on implementing this behavior?

Does anyone have an idea how to get this behavior from createIndex in code?

I wanted to ask about changesets too. Is it best practice to create so many changesets? I thought of a changeset as something that includes the changes of an app version. For example, if I want to provide the initial structure of my DB, I thought that having just one changeset would be suitable, but it seems multiple changesets are encouraged. The diff tool also produces multiple changesets instead of just one with multiple changes.

What is the best practice?

I have some stubbed logic so far to support automatically skipping indexes, but it hasn’t been fully implemented yet. I created to track the feature.

For now, the best approach is probably to use the dbms attribute on the changeSets that do the createIndex. You can use dbms="!h2" as well.
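For example, something like this would run the createIndex everywhere except H2 (table and index names adapted to your schema):

```xml
<changeSet id="example-3" author="example" dbms="!h2">
    <!-- skipped entirely on H2, where the unique constraint
         already created an equivalent index -->
    <createIndex tableName="person" indexName="idx_person_email" unique="true">
        <column name="email"/>
    </createIndex>
</changeSet>
```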


It is normally best to create lots of changeSets with just a single change in each. The reason is that while liquibase tries to run each changeSet in a transaction, most databases will auto-commit createTable and other DDL calls. If you have, say, 10 createTable changes in a changeSet and the first 5 execute successfully but the 6th fails, the next time you run liquibase the changeSet will not run because the first createTable fails with a “table exists” error, and there will be no way to get changes 6-10 to execute.

It can make sense to have multiple insert/update/delete calls in a single changeSet since they are transactional but normally it is best to have just one change per changeset.
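In changelog terms, the safe version of the scenario above would be split like this sketch (hypothetical tables):

```xml
<!-- one DDL change per changeSet: if creating "orders" fails, the
     "customer" changeSet is already recorded as ran and is not
     retried, so fixing and re-running liquibase picks up cleanly -->
<changeSet id="create-customer" author="example">
    <createTable tableName="customer">
        <column name="id" type="int"/>
    </createTable>
</changeSet>

<changeSet id="create-orders" author="example">
    <createTable tableName="orders">
        <column name="id" type="int"/>
    </createTable>
</changeSet>
```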


Well, that was already clear to me, but what I can’t understand is why such a big changeset should fail in the first place. I mean, we all test this stuff, and of course it can fail sometimes, but mostly you will have to make manual changes anyway on a system where a changeset failed. Even if it did fail, as you pointed out, I thought the changes were executed within a “transaction” and would therefore be rolled back on failure. If DDL does not support transactions, the changes could still be reversed, couldn’t they?
Maybe I am missing the point here, but having many changesets feels kind of wrong to me. It would really help if you could give me an example scenario, or point me to one, in which the many-changesets approach was actually helpful.

If there is any error executing changes in a changeset, liquibase doesn’t really know how to recover other than just doing a transaction rollback, so it can’t recover from a failed changeSet with two createTables where the first passes and the second fails.

You are right that conceptually there is no difference between one changeset with many changes and many changesets with one change each, and with testing the changes should always run correctly. However, I’ve seen enough cases where things fail unexpectedly in different deployments that I tend to err on the side of safety (having lots of changesets). I’ve seen liquibase updates suddenly fail because of disk space issues, different permissions, and people messing around with the database outside of liquibase, and I’d rather have changesets fail more gracefully and recoverably.