I am working on an application that will be deployed to a cloud architecture like AWS, with multiple application servers behind a load balancer. All of the app servers connect to the same replicated database on the back end. What is the best practice for running Liquibase in an environment like this? I want to minimize downtime when I update the application (or eliminate it if possible), but I can't wrap my head around how database updates would work: if I update the database, the running app servers will error out since the model has changed. Which Liquibase method should I use (Ant/Maven, Spring, etc.)? Does anyone have any tips or advice for this kind of situation? Any suggestions would be greatly appreciated. Thanks.
From a Liquibase standpoint, there isn't anything special you need to do, because Liquibase uses a DATABASECHANGELOGLOCK table for exactly this situation: the database is only updated by one instance at a time.
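As for which method to use: any of the integrations works, and the Spring integration is a natural fit if your app servers are already Spring-based, since each server attempts the update at startup and the lock table makes them take turns. A minimal sketch (the bean names and changelog path are made up for illustration; it assumes a `dataSource` bean is defined elsewhere in the context):

```xml
<!-- Runs the changelog at startup, before the rest of the application uses the database. -->
<!-- Assumes a DataSource bean named "dataSource" is defined elsewhere in this context. -->
<bean id="liquibase" class="liquibase.integration.spring.SpringLiquibase">
    <property name="dataSource" ref="dataSource"/>
    <property name="changeLog" value="classpath:db/changelog.xml"/>
</bean>
```

With this in place, the first server to start acquires the lock and applies any pending changesets; the others block on the DATABASECHANGELOGLOCK table and then see an up-to-date DATABASECHANGELOG.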
Having your application work against a database schema that may not match what it was coded against is a different issue, and one that really falls outside the scope of what Liquibase tries to manage.
Depending on the technologies you use (Hibernate, raw SQL, etc.), a policy of "only backwards-compatible changes allowed" may be a good start. For example, you can't do a normal column rename, because it breaks code that uses the old name. Instead, add a new column and copy the data over; then in a future release (once all servers are running against the new column) you can drop the old one. Views may help with some of the conversion as well.
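As a sketch, that copy-then-drop rename could look like the following changesets (table and column names are made up for illustration, and the two changesets would ship in separate releases):

```xml
<!-- Release N: add the new column and copy the data over. -->
<!-- The old "name" column stays so servers still running the old code keep working. -->
<changeSet id="rename-name-step-1" author="example">
    <addColumn tableName="customer">
        <column name="full_name" type="varchar(255)"/>
    </addColumn>
    <update tableName="customer">
        <column name="full_name" valueComputed="name"/>
    </update>
</changeSet>

<!-- Release N+1: every server now reads full_name, so the old column can be dropped. -->
<changeSet id="rename-name-step-2" author="example">
    <dropColumn tableName="customer" columnName="name"/>
</changeSet>
```

During the window between the two releases, new-code servers also have to keep writing the old column (or a trigger/view has to do it) so old-code servers don't see stale data.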
Thanks for the reply. I will definitely look into a backwards-compatible-only policy for my schema changes. I appreciate the feedback.