We are seeing some strange behavior with the databasechangelog table: it contains duplicated records. The “normal” state of the table is 98 records, but our stage database has 195 records and our production database has 100. We can see the duplicates on both stage and prod, yet all of these databases use the same master.xml file for Liquibase. We previously used Liquibase 3.8.6 (maybe even older) and now use 4.6.2; the database is Postgres 12.6.
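For context, a quick query along these lines (a sketch, assuming the standard Liquibase table layout) can confirm how many changesets are actually duplicated:

```sql
-- Compare total rows with distinct (id, author, filename) combinations;
-- the gap between the two counts is the number of duplicate rows.
SELECT count(*) AS total_rows,
       count(DISTINCT (id, author, filename)) AS distinct_changesets
FROM databasechangelog;
```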
Can you explain the behavior? Will we have some problems in the future because of these duplicates? Should we do anything with them?
Please, let me know if you have any additional questions.
@Captain1653 could you add an example of a duplicate case from the databasechangelog table (feel free to change details such as author or filename if this is sensitive data)?
I am curious if the filename field is a little different (or not).
@Captain1653 hmm, interesting. Could you also add the author field to the select list? I ask because the id, author, and filename attributes and their uniqueness are among the most important things to look into.
It would also help to include dateexecuted and deployment_id (or feel free to take the easy path and include everything).
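For reference, a duplicate-listing query along these lines (a sketch against the standard table) would show all of it at once:

```sql
-- List every (id, author, filename) combination that appears more than once,
-- together with when and in which deployment each copy was executed.
SELECT id, author, filename, dateexecuted, deployment_id
FROM databasechangelog
WHERE (id, author, filename) IN (
    SELECT id, author, filename
    FROM databasechangelog
    GROUP BY id, author, filename
    HAVING count(*) > 1
)
ORDER BY id, dateexecuted;
```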
On a side note, regarding the existing duplicates and future deployments, I would leave them there for the time being until a root cause is identified for this particular case.
Liquibase checks this databasechangelog table and, based on the existence of the combination of id, author, and filename, decides whether a particular changeset should be executed on that target DB (unless, of course, this is overridden with runAlways and similar properties).
So for the moment and in the future, things are safe. But once a root cause is identified, and you know for sure that each id, author, and filename combination you decide to keep in the databasechangelog table has an associated changeset (with exactly those values) in the changelogs, a sanitization can be done to get rid of the unneeded duplicates.
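A hypothetical cleanup for Postgres might look like the following; it keeps the earliest-executed row per changeset and deletes the later copies. Treat it as a sketch: back up the table first, and only run it once you have verified the surviving rows match real changesets in your changelogs.

```sql
-- Delete all but the earliest-executed row for each (id, author, filename).
-- ctid identifies the physical row, so this works even when duplicate rows
-- are otherwise identical.
DELETE FROM databasechangelog
WHERE ctid IN (
    SELECT ctid
    FROM (
        SELECT ctid,
               row_number() OVER (PARTITION BY id, author, filename
                                  ORDER BY dateexecuted) AS rn
        FROM databasechangelog
    ) ranked
    WHERE ranked.rn > 1
);
```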
The user gets impatient, so they start another Liquibase “update” while the first is still running. This second deployment fails with “unable to acquire Liquibase lock”, as it should.
The user runs “release-locks”, not realizing the first deployment is still running and still needs the lock.
The user runs Liquibase “update” again. Since they removed the lock that should have been blocking this update, they end up with duplicate executions and duplicate rows in the databasechangelog table.
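Before releasing a lock manually, it is worth inspecting it first; a check along these lines (a sketch against the standard lock table) shows when and from where the lock was taken, so you can tell whether the holding process is really gone:

```sql
-- LOCKGRANTED and LOCKEDBY identify when and from which host the lock was
-- acquired; only release it if that process is definitely no longer running.
SELECT id, locked, lockgranted, lockedby
FROM databasechangeloglock;
```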
Sometimes we had an issue during deployment where the app didn’t start because it was unable to acquire the lock. The reason was that there was already a record in databasechangeloglock, so I had to remove it manually. Is it possible that somehow two identical [main] threads are started when the application is deployed? Because I see this in the log:
2022-10-13 11:24:09,236 INFO org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext [main] Root WebApplicationContext: initialization completed in 11036 ms
2022-10-13 11:24:11,159 INFO liquibase.logging.core.JavaLogger [main] Clearing database change log checksums
2022-10-13 11:24:11,640 INFO liquibase.logging.core.JavaLogger [main] Successfully acquired change log lock
2022-10-13 11:24:11,982 INFO liquibase.logging.core.JavaLogger [main] Successfully released change log lock
2022-10-13 11:24:11,998 INFO liquibase.logging.core.JavaLogger [main] Successfully acquired change log lock
2022-10-13 11:24:34,361 INFO liquibase.logging.core.JavaLogger [main] Reading from gims.DATABASECHANGELOG
2022-10-13 11:27:47,652 INFO liquibase.logging.core.JavaLogger [main] Reading from gims.DATABASECHANGELOG
Now I am starting to think that I should not be seeing these duplicate entries in our log.
I would say that manually removing the lock is likely the cause. You should never manually remove the Liquibase lock without being 100% certain that Liquibase is no longer running against the database.
You’ll get this message when two or more Liquibase instances are trying to manage the same schema simultaneously. This behavior is intentional. Very infrequently, it’s possible that a Liquibase command started but the process got killed and didn’t unlock the lock table. What do you think is happening in your case? Do you have multiple Liquibase instances running at the same time, or are the Liquibase jobs getting terminated mid-process?
I now don’t know which post to answer: the DB duplicates or the log duplicates?
I think the database duplicates might be the result of a process that got killed and didn’t unlock the lock table.
The log duplicates might be a problem of multiple instances, although I don’t know why there would be multiple.
I’m running into this same issue as well, except in Jenkins.
All of the changesets in my 39 changelogs are being created twice in the DATABASECHANGELOG table after modifying 2 of my changelog files with additional updates.
The DATABASECHANGELOGLOCK table was not locked.
No running process was still executing.
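For anyone double-checking those same conditions on Postgres, a look at pg_stat_activity (an illustrative query, not part of Liquibase) can confirm whether any session is still touching the changelog tables:

```sql
-- List live sessions whose current query involves the Liquibase tables.
SELECT pid, state, query_start, query
FROM pg_stat_activity
WHERE query ILIKE '%databasechangelog%';
```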
I am also experiencing the same issue with PyLiquibase, where changesets are being executed multiple times, leading to duplicate rows in the database. The issue arises when using the pyliquibase library in a Python environment to execute Liquibase commands on multiple AWS instances.
I encounter duplicate executions of changesets. However, the issue seems to be resolved when the order of the commands is changed. Here is the order of the pyliquibase CLI commands in Python: