Liquibase CI/CD

I would like to understand what happens to the Liquibase update job when a pod hosting Liquibase goes down midway. For example, I am adding a new non-null column to an existing table with a million records. Since it's non-null, I am assigning a default value that will be applied to all million rows. Let's say the pod goes down after the new column value has been written to 100k rows; what happens to the other 900k rows?
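For concreteness, a changeset for this kind of change might look like the following in a SQL-formatted changelog (the table and column names are made up for illustration; the DDL is Oracle-flavored to match the answer below):

```sql
--liquibase formatted sql

--changeset myteam:add-status-column
ALTER TABLE customer ADD status VARCHAR2(20) DEFAULT 'ACTIVE' NOT NULL;
--rollback ALTER TABLE customer DROP COLUMN status;
```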

Once a Liquibase SQL statement is running, it behaves like any other client connection to the database: the DBMS handles transaction control the way it normally would. Speaking for Oracle, the in-flight ALTER TABLE work would get rolled back. The only thing specific to Liquibase is that the Liquibase lock will be left "on" in the DATABASECHANGELOGLOCK table, since the Liquibase process was killed, and you will need to run the release-locks command to remove it.
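If you do end up in that state, release-locks is the supported way to clear it; the equivalent manual cleanup (a sketch, assuming the default DATABASECHANGELOGLOCK table name) looks like this:

```sql
-- Inspect the lock row the killed process left behind
SELECT id, locked, lockgranted, lockedby
  FROM DATABASECHANGELOGLOCK;

-- What `liquibase release-locks` effectively does: clear the lock row
UPDATE DATABASECHANGELOGLOCK
   SET locked = 0, lockgranted = NULL, lockedby = NULL
 WHERE id = 1;
COMMIT;
```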

I'd recommend putting a process in place to prevent your pod from stopping while Liquibase is running. See this topic for an example that I have implemented:
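For what it's worth, one common pattern in Kubernetes (a sketch only, not necessarily what the topic above describes) is to run the update as a one-shot Job separate from the application pod, so an application rollout or scale-down can't kill a deployment mid-flight. The names, image tag, and connection details below are placeholders:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: liquibase-update            # hypothetical name
spec:
  backoffLimit: 0                   # do not auto-retry a half-finished update
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: liquibase
          image: liquibase/liquibase:4.29   # pin whatever version you actually use
          # mount your changelog into the container (volume omitted for brevity)
          args:
            - update
            - --changelog-file=changelog.xml
            - --url=jdbc:oracle:thin:@db-host:1521/ORCLPDB1
            - --username=app_user
            - --password=change-me          # pull from a Secret in real use
```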

Thanks for the reply. Yes, I am aware of the lock taken by Liquibase.
So if the pod goes down, the database would usually roll back; fair enough.
There is also one alternate approach I am considering, other than Kubernetes, which is Lambda.
The maximum time a Lambda can run is 15 minutes. Do you know how to handle longer Liquibase updates in such a scenario?

I don't really know about Lambdas, but database deployments can run for hours, so 15 minutes seems much too short. Last week I had a customer run a Liquibase deployment that took 8 hours building a massive index.

Yes, scenarios like indexing can run into hours. If anyone is using Lambda for Liquibase, please share how you handled this. Thanks.

Can I use AWS Fargate to deploy Liquibase instead of Lambda? Has anyone used this before?