IndexExists performance

The “indexExists” precondition builds a full database snapshot for each check.
Unfortunately, the JDBC API doesn’t have a simple getIndex function.

The best solution would be to implement a database-specific query for each RDBMS (like )
For the generic implementation, maybe we could build a reduced snapshot without columns and views?
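To illustrate the database-specific approach, on Oracle a direct data-dictionary lookup could replace the snapshot entirely. This is only a sketch; the schema and index names are placeholders:

```sql
-- Hypothetical Oracle-specific existence check (names are illustrative).
-- Returns a row only when the index exists, with no snapshot needed.
SELECT index_name
FROM   all_indexes
WHERE  owner      = 'MY_SCHEMA'
AND    index_name = 'IDX_MY_INDEX';
```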

Any comments?

Yes, the indexExists precondition needs some performance improvements. I made sure the precondition APIs would support it, but had to put off implementing it until 2.1 in order to get 2.0 out the door. Like you said, there is no JDBC standard, so it will be a bit of work.

Cutting back on what gets snapshotted may be a good idea. We may need to use column information to determine whether an index is a standalone index or a PK or FK index, though. Views could definitely be skipped.
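On Oracle, one way to make the standalone-vs-constraint distinction without a full column snapshot is to check whether a constraint records the index as its backing index (Oracle stores this in ALL_CONSTRAINTS.INDEX_NAME for primary-key and unique constraints). A sketch, with a placeholder schema name:

```sql
-- Sketch: classify indexes as constraint-backed (PK/unique) or standalone
-- by checking whether any constraint lists them as its backing index.
SELECT i.index_name,
       CASE WHEN c.constraint_name IS NOT NULL
            THEN 'CONSTRAINT-BACKED'
            ELSE 'STANDALONE'
       END AS index_kind
FROM   all_indexes i
LEFT JOIN all_constraints c
       ON  c.owner      = i.owner
       AND c.index_name = i.index_name
WHERE  i.owner = 'MY_SCHEMA';
```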

If you know the SQL for your particular database, you could subclass the snapshot class for your database type (see http:// ). If you could send it along to us as well, that would be great. Otherwise, I plan to look at it for 2.1.


I’m using a Liquibase 3.2.0 snapshot and am also suffering from severely slow performance in the index checks. I did some digging and found the query (Oracle) where the slowness appears to be; the stack trace is below. In my environment, this particular query was taking between 20 and 30 minutes.

SQL Query:

SELECT  NULL                                                   AS pktable_cat  ,
        p.owner                                                AS pktable_schem,
        p.table_name                                           AS pktable_name ,
        pc.column_name                                         AS pkcolumn_name,
        NULL                                                   AS fktable_cat  ,
        f.owner                                                AS fktable_schem,
        f.table_name                                           AS fktable_name ,
        fc.column_name                                         AS fkcolumn_name,
        fc.position                                            AS key_seq      ,
        NULL                                                   AS update_rule  ,
        DECODE (f.delete_rule, 'CASCADE', 0, 'SET NULL', 2, 1) AS delete_rule  ,
        f.constraint_name                                      AS fk_name      ,
        p.constraint_name                                      AS pk_name      ,
        DECODE(f.deferrable, 'DEFERRABLE', 5, 'NOT DEFERRABLE', 7, 'DEFERRED', 6) deferrability
FROM    all_cons_columns pc,
        all_constraints p  ,
        all_cons_columns fc,
        all_constraints f
WHERE   1                      = 1
        AND p.owner            = 'OPENTWINS_STAGING_SBS'
        AND f.constraint_type  = 'R'
        AND p.owner            = f.r_owner
        AND p.constraint_name  = f.r_constraint_name
        AND p.constraint_type IN ('P', 'U')
        AND pc.owner           = p.owner
        AND pc.constraint_name = p.constraint_name
        AND pc.table_name      = p.table_name
        AND fc.owner           = f.owner
        AND fc.constraint_name = f.constraint_name
        AND fc.table_name      = f.table_name
        AND fc.position        = pc.position
ORDER BY fktable_schem,
        fktable_name  ,
        key_seq ;

Stack trace below:

Daemon Thread [[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'] (Suspended)

This thread appears to also be related to this issue.
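For context, the slow query above gathers foreign-key metadata for the entire schema at once. If only one table matters for a given check, a filtered variant (a sketch I wrote for illustration, not the driver's actual SQL; the table name is a placeholder) cuts the work dramatically:

```sql
-- Sketch: restrict the foreign-key lookup to a single table.
-- MY_TABLE is a placeholder; the owner comes from the query above.
SELECT f.constraint_name AS fk_name,
       fc.column_name    AS fkcolumn_name,
       fc.position       AS key_seq
FROM   all_constraints  f,
       all_cons_columns fc
WHERE  f.owner            = 'OPENTWINS_STAGING_SBS'
AND    f.constraint_type  = 'R'
AND    f.table_name       = 'MY_TABLE'
AND    fc.owner           = f.owner
AND    fc.constraint_name = f.constraint_name
ORDER BY key_seq;
```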

Thanks. I created to track the fix. 

Does it take that long to run on your system a single time? Or is the query run multiple times, adding up to the 20-30 minutes?


Thanks, I’ll take a look at what you are doing and see what makes sense to move back into the main repo.


This query alone took 20-30 minutes on our database for a single invocation. We have a large database, and it is copied per developer onto the same database server under a different user/schema.

I began writing some Oracle preconditions as a Liquibase plugin to speed things up; they appear to be running much faster for us.
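As an illustration of the kind of query such a plugin precondition might issue (a sketch; this is not the actual plugin code, and the names are placeholders), a single data-dictionary count is typically sub-second:

```sql
-- Sketch: fast check that an index exists on a given table column,
-- the sort of query a custom Oracle precondition could run directly.
SELECT COUNT(*)
FROM   all_ind_columns
WHERE  index_owner = 'MY_SCHEMA'
AND    table_name  = 'MY_TABLE'
AND    column_name = 'MY_COLUMN';
```

The precondition would pass when the count is non-zero, skipping the snapshot machinery entirely.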