Best practice for zero-downtime crypto migration: <customChange> + DB Triggers?

Hi everyone,

I am planning a zero-downtime rolling upgrade for an application where we are changing our encryption algorithm. I need to migrate millions of encrypted rows in our database while both “Old Nodes” (using v1 crypto) and “New Nodes” (using v2 crypto) are actively writing to the database.

Here is the scenario and my proposed Liquibase implementation. I would love some feedback on whether this is the recommended approach or if there is a better pattern.

The Scenario

We have a table my_secure_table with an old_col (encrypted with algorithm v1). During the upgrade, we need to:

  1. Create a new_col (for algorithm v2).

  2. Bulk decrypt the existing old_col data and re-encrypt it into new_col.

  3. Keep both columns in sync during the rolling upgrade (if an old node writes to old_col, it must be synced to new_col, and vice versa).

My Proposed Liquibase Solution

I am planning to use a single changelog.xml with three changesets:

1. Schema Update: a standard <addColumn> changeset that creates new_col.
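For reference, changeset 1 might look like the sketch below (the id, author, and column type are placeholders; match the type to whatever old_col uses):

```xml
<changeSet id="1-add-new-col" author="myteam">
    <addColumn tableName="my_secure_table">
        <!-- Placeholder type; should match old_col's real definition. -->
        <column name="new_col" type="VARCHAR(4000)"/>
    </addColumn>
    <rollback>
        <dropColumn tableName="my_secure_table" columnName="new_col"/>
    </rollback>
</changeSet>
```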

2. The Bulk Data Migration (One-Time): Since encryption/decryption requires our application’s Java crypto libraries, I plan to use a <customChange> class. This Java class will execute a SELECT, perform the crypto transformations in memory, and run UPDATE statements to backfill new_col. Note: this runs only once, during deployment.
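To make changeset 2 concrete, here is a minimal, self-contained sketch of the per-row transform that the <customChange> class would apply. The Liquibase CustomTaskChange boilerplate and the JDBC plumbing are omitted, and the concrete schemes (v1 as AES/CBC, v2 as AES/GCM, ciphertexts Base64-encoded with a prepended IV/nonce) are illustrative assumptions, not necessarily the real algorithms:

```java
import javax.crypto.Cipher;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Arrays;
import java.util.Base64;

class CryptoMigrator {
    private static final SecureRandom RNG = new SecureRandom();

    // v1 (hypothetical legacy scheme): AES/CBC, ciphertext = IV || body, Base64-encoded.
    static String encryptV1(SecretKeySpec key, String plain) throws Exception {
        byte[] iv = new byte[16];
        RNG.nextBytes(iv);
        Cipher c = Cipher.getInstance("AES/CBC/PKCS5Padding");
        c.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(iv));
        byte[] body = c.doFinal(plain.getBytes(StandardCharsets.UTF_8));
        byte[] out = new byte[iv.length + body.length];
        System.arraycopy(iv, 0, out, 0, iv.length);
        System.arraycopy(body, 0, out, iv.length, body.length);
        return Base64.getEncoder().encodeToString(out);
    }

    static String decryptV1(SecretKeySpec key, String ct) throws Exception {
        byte[] raw = Base64.getDecoder().decode(ct);
        Cipher c = Cipher.getInstance("AES/CBC/PKCS5Padding");
        c.init(Cipher.DECRYPT_MODE, key, new IvParameterSpec(Arrays.copyOfRange(raw, 0, 16)));
        return new String(c.doFinal(raw, 16, raw.length - 16), StandardCharsets.UTF_8);
    }

    // v2 (hypothetical new scheme): AES/GCM, ciphertext = nonce || body+tag, Base64-encoded.
    static String encryptV2(SecretKeySpec key, String plain) throws Exception {
        byte[] nonce = new byte[12];
        RNG.nextBytes(nonce);
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, nonce));
        byte[] body = c.doFinal(plain.getBytes(StandardCharsets.UTF_8));
        byte[] out = new byte[nonce.length + body.length];
        System.arraycopy(nonce, 0, out, 0, nonce.length);
        System.arraycopy(body, 0, out, nonce.length, body.length);
        return Base64.getEncoder().encodeToString(out);
    }

    static String decryptV2(SecretKeySpec key, String ct) throws Exception {
        byte[] raw = Base64.getDecoder().decode(ct);
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, Arrays.copyOfRange(raw, 0, 12)));
        return new String(c.doFinal(raw, 12, raw.length - 12), StandardCharsets.UTF_8);
    }

    // The core per-row transformation the customChange would run during the backfill.
    static String migrateRow(SecretKeySpec key, String v1Ciphertext) throws Exception {
        return encryptV2(key, decryptV1(key, v1Ciphertext));
    }
}
```

In the real changeset the class would stream rows with a cursor and commit in batches rather than holding millions of rows in memory at once.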

3. The Ongoing Dual-Write Sync (Runtime): To handle mixed writes from the old and new nodes during the rolling upgrade, I plan to use the <sql> tag to deploy a database-level BEFORE INSERT OR UPDATE trigger. If the DB supports it (e.g., Postgres PL/Java or the Oracle JVM), this trigger will call a database-hosted Java function to handle the crypto sync dynamically on every row insert or update.
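For changeset 3, assuming Postgres, the wiring might look like the sketch below. The reencrypt_v1_to_v2 / reencrypt_v2_to_v1 functions are hypothetical placeholders for the PL/Java-hosted crypto (plain PL/pgSQL cannot run the application's Java code itself):

```xml
<changeSet id="3-dual-write-trigger" author="myteam" runOnChange="true">
    <sql splitStatements="false"><![CDATA[
        CREATE OR REPLACE FUNCTION my_secure_table_sync() RETURNS trigger AS $$
        BEGIN
            -- On UPDATE, re-encrypt whichever side actually changed;
            -- on INSERT, fill whichever side the writing node left NULL.
            IF TG_OP = 'UPDATE' AND NEW.old_col IS DISTINCT FROM OLD.old_col THEN
                NEW.new_col := reencrypt_v1_to_v2(NEW.old_col);
            ELSIF TG_OP = 'UPDATE' AND NEW.new_col IS DISTINCT FROM OLD.new_col THEN
                NEW.old_col := reencrypt_v2_to_v1(NEW.new_col);
            ELSIF NEW.new_col IS NULL AND NEW.old_col IS NOT NULL THEN
                NEW.new_col := reencrypt_v1_to_v2(NEW.old_col);  -- insert from an old node
            ELSIF NEW.old_col IS NULL AND NEW.new_col IS NOT NULL THEN
                NEW.old_col := reencrypt_v2_to_v1(NEW.new_col);  -- insert from a new node
            END IF;
            RETURN NEW;
        END;
        $$ LANGUAGE plpgsql;

        DROP TRIGGER IF EXISTS my_secure_table_sync_trg ON my_secure_table;
        CREATE TRIGGER my_secure_table_sync_trg
            BEFORE INSERT OR UPDATE ON my_secure_table
            FOR EACH ROW EXECUTE FUNCTION my_secure_table_sync();
    ]]></sql>
    <rollback>
        <sql>DROP TRIGGER IF EXISTS my_secure_table_sync_trg ON my_secure_table;</sql>
    </rollback>
</changeSet>
```

One design note: the trigger outlives the deployment, so a later changeset would have to drop it once every node is on v2 crypto.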

My Questions for the Community:

  1. Massive Backfills: Is <customChange> the recommended Liquibase pattern for doing heavy, millions-of-rows data migrations that require external Java libraries? Are there memory/transaction timeout gotchas I should be aware of?

  2. Triggers for Dual-Writes: Has anyone successfully used Liquibase to deploy DB-level triggers specifically for rolling-upgrade dual-writes? Or is it highly preferred to handle the dual-write logic entirely in the application layer (e.g., Hibernate @PrePersist) instead of using DB triggers?
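For comparison, the application-layer alternative mentioned in question 2 would look roughly like this: a lifecycle callback that fills whichever column is missing before a flush. The class and the reEncrypt* helpers are hypothetical stand-ins (the real ones would call the application's crypto library), and in an actual Hibernate entity syncColumns() would carry @PrePersist/@PreUpdate annotations:

```java
// Hypothetical entity-style class; in a real JPA entity, syncColumns() would be
// annotated @PrePersist @PreUpdate so Hibernate invokes it before every write.
class SecureRow {
    String oldCol; // v1 ciphertext
    String newCol; // v2 ciphertext

    void syncColumns() {
        // NULL checks handle fresh inserts; a real implementation would also use
        // dirty tracking to detect which column changed on an update.
        if (newCol == null && oldCol != null) {
            newCol = reEncryptV1ToV2(oldCol);
        } else if (oldCol == null && newCol != null) {
            oldCol = reEncryptV2ToV1(newCol);
        }
    }

    // Placeholder transforms standing in for the real crypto library; they assume
    // a fake "v1:"/"v2:" ciphertext prefix purely for illustration.
    static String reEncryptV1ToV2(String v1) { return "v2:" + v1.substring(3); }
    static String reEncryptV2ToV1(String v2) { return "v1:" + v2.substring(3); }
}
```

The trade-off is that @PrePersist only fires for writes going through this application's ORM, whereas a DB trigger also covers ad-hoc SQL and any other clients.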

  3. Alternative Patterns: Is there a more standard “Liquibase way” to handle this kind of rolling encryption migration?

Thanks in advance for any insights!