Aborting Liquibase mid-process affects DATABASECHANGELOG integrity and leaves the lock hanging around.

Unfortunately it is normal, and there isn’t much we can do about it. When you hit Ctrl-C, it kills the Java VM without enough warning for Liquibase to finish what it is doing and clean up properly.


The normal flow is (sketched in SQL after the list):

  1. Mark the databasechangeloglock table as locked to prevent multiple Liquibase instances from updating the same database concurrently.

  2. Begin transaction

  3. Execute each change in a changeSet. Note: most Liquibase changes are DDL statements, which auto-commit.

  4. Insert into databasechangelog that changeSet ran

  5. Commit transaction.
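
For concreteness, that flow looks roughly like this at the SQL level (the changeSet ID, author, and ALTER TABLE are made-up examples; the exact statements vary by database and Liquibase version):

    -- Step 1: grab the lock (a vendor-neutral row update)
    UPDATE DATABASECHANGELOGLOCK
       SET LOCKED = 1, LOCKGRANTED = NOW(), LOCKEDBY = 'myhost'
     WHERE ID = 1;

    -- Step 2: begin the transaction
    START TRANSACTION;

    -- Step 3: run the changes; DDL like this auto-commits on MySQL
    ALTER TABLE customer ADD COLUMN middle_name VARCHAR(50);

    -- Step 4: record that the changeSet ran
    INSERT INTO DATABASECHANGELOG (ID, AUTHOR, FILENAME, DATEEXECUTED)
    VALUES ('add-middle-name', 'angie', 'update.xml', NOW());

    -- Step 5: commit.  If the JVM is killed before this point, the DDL from
    -- step 3 is already permanent, the step 4 row is lost, and the LOCKED
    -- flag from step 1 is still set.
    COMMIT;

    -- Afterwards, on a clean run: release the lock
    UPDATE DATABASECHANGELOGLOCK
       SET LOCKED = 0, LOCKGRANTED = NULL, LOCKEDBY = NULL
     WHERE ID = 1;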


It sounds like in your case Java was killed during step 4. The changes in step 3 were auto-committed, but the record marking the changeSet as run was never committed.


Unfortunately, I think the only way to prevent it is to not stop Liquibase mid-update.


Nathan

Hi there,

We just started using Liquibase, and our QA ran into the following problem: aborting Liquibase mid-process leaves a lock hanging around and doesn’t properly log which changeSets were successfully executed.

This is on Ubuntu 8.04 LTS running MySQL 5.0.51a

1.  From the command line, start the Liquibase process, e.g.
    $ liquibase --changeLogFile=update.xml update

2.  Hit Ctrl-C during process

3.  After-effects:
    a.  Leaves a DATABASECHANGELOGLOCK record
    b.  Doesn’t properly record the last changeSet that was executed successfully before the abort occurred.

4.  Trying to run the same command doesn’t work because of the lock (which I can clear easily with liquibase releaseLocks; see the sketch below), and it also generates errors because it tries to execute a changeSet that has in fact already run (e.g. an ALTER TABLE command).
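
(For reference, clearing the stale lock by hand amounts to roughly the following, assuming the standard DATABASECHANGELOGLOCK columns; liquibase releaseLocks does the equivalent without hand-written SQL.)

    -- Rough sketch of clearing a stale lock manually:
    UPDATE DATABASECHANGELOGLOCK
       SET LOCKED = 0, LOCKGRANTED = NULL, LOCKEDBY = NULL
     WHERE ID = 1;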

Is this normal behavior?  Is there something else I should be doing to prevent this?

Thanks,
Angie




Well, you could register a shutdown hook.

It’s not just killing the process, though.  What if the DB connection is severed somehow, say from the database side, or the network goes down?  Basically, anything that causes the Liquibase process to barf out.

Anyway, I did add preconditions to each changeSet to try to make them more idempotent.  E.g. any ALTER TABLE changeSets now first have a precondition checking whether the column already exists.
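
On MySQL, the check behind such a precondition boils down to something like this (the schema, table, and column names are just examples):

    -- Does the column already exist?
    SELECT COUNT(*)
      FROM information_schema.COLUMNS
     WHERE TABLE_SCHEMA = 'mydb'
       AND TABLE_NAME   = 'customer'
       AND COLUMN_NAME  = 'middle_name';
    -- If the count is non-zero, the changeSet can be skipped instead of
    -- failing with a duplicate-column error.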

I think one improvement would be if DATABASECHANGELOG logged when it started and finished each changeSet.  I know the elapsed time is printed in the logs, but it would also be useful to have that info in the log table itself.  In your flow, add step 2a: insert into DATABASECHANGELOG that the changeSet is being run.  Then step 4 would be: update DATABASECHANGELOG that the changeSet ran.
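
In SQL terms the proposal would look roughly like this (DATESTARTED is a hypothetical new column; ID, AUTHOR, FILENAME, and DATEEXECUTED already exist; the values are made up):

    -- Step 2a (proposed): record that the changeSet has started.
    INSERT INTO DATABASECHANGELOG (ID, AUTHOR, FILENAME, DATESTARTED)
    VALUES ('add-middle-name', 'angie', 'update.xml', NOW());

    -- Step 4 (revised): mark the changeSet as finished instead of inserting it.
    UPDATE DATABASECHANGELOG
       SET DATEEXECUTED = NOW()
     WHERE ID = 'add-middle-name'
       AND AUTHOR = 'angie'
       AND FILENAME = 'update.xml';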

p.s. that comment was from me (now logged in). 

The shutdown hook may help. I’d be afraid to do it in general, but in the command-line, Ant, and Maven versions it should be good.


The “change-log started” column could be helpful too.  I’ll create issues for both of them.


Nathan

We definitely could, although then you begin to get different logic between different databases. It may be worth implementing in a future revision, however, since it can be an issue.


Nathan

Couldn’t Liquibase use a database-specific locking strategy rather than a custom lock table that is vendor-neutral?

So, for MySQL databases, use LOCK TABLES on DATABASECHANGELOGLOCK. Then if the connection breaks, the lock is released and the transaction is rolled back.

For databases that support table locking (e.g. MySQL, Oracle, …), we wouldn’t be left with a stuck lock when the connection breaks.
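
A sketch of what that could look like on MySQL (a suggestion only, not what Liquibase does today):

    -- Hold a real table lock for the duration of the update:
    LOCK TABLES DATABASECHANGELOGLOCK WRITE;
    -- ... run the changeSets here ...
    UNLOCK TABLES;
    -- If the connection drops before UNLOCK TABLES, MySQL releases the table
    -- lock automatically when the session ends, so no stale lock is left.

(One wrinkle: under LOCK TABLES, MySQL only lets the session touch tables it has locked, so the tables being changed would need to be locked as well.)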

Frank