Duplicated records in the databasechangelog table

Hi Liquibase Team.

We are seeing some strange behavior with the databasechangelog table: it contains duplicated records. The “normal state” of the table is 98 records, but our stage database has 195 records and our production database has 100. We can see the duplicates on both stage and prod, even though both databases use the same master.xml file for Liquibase. We have used Liquibase since 3.8.6 (maybe even older) and are now on 4.6.2; the database is Postgres 12.6.

Can you explain the behavior? Will we have some problems in the future because of these duplicates? Should we do anything with them?

Please, let me know if you have any additional questions.

Kind regards,
Captain1653

@Captain1653 could you add an example of a duplicate case from the databasechangelog table (feel free to change the details, e.g. author or filename, if this is sensitive data)?
I am curious whether the filename field is slightly different (or not).

Thanks,
Eduard

@EduardUta thank you for the quick reply.

I have attached a screenshot of our stage database. Is that enough for you?

Kind regards,
Captain1653

@Captain1653 hmm, interesting. Could you also add the author field to the select list? (Asking because the id, author and filename attributes, and their uniqueness, are among the most important things to look into.)
It would also help to include dateexecuted and deployment_id (or feel free to take the easy path and include everything).
On a side note, regarding the existing duplicates and future deployments, I would leave them in place for the time being, until a root cause is identified for this particular case.
Liquibase checks the databasechangelog table and, based on whether the combination of id, author and filename already exists, decides whether a particular changeset is executed on that target DB (unless, of course, this is overridden with runAlways properties etc.).
So for the moment, and in the future, things are safe. But once a root cause is identified and you know for sure that the id, author and filename combinations you decide to keep in the databasechangelog table have associated changesets (with exactly those values) in the changelogs, then of course a sanitization can be done to get rid of the unneeded duplicates.
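For reference, a query along these lines should surface any duplicated combinations (a sketch only, assuming the standard lowercase Postgres column names of the databasechangelog table):

SELECT id, author, filename, COUNT(*) AS copies
FROM databasechangelog
GROUP BY id, author, filename
HAVING COUNT(*) > 1
ORDER BY filename, id;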

Thanks,
Eduard

@EduardUta Thank you for your explanation.

I have added those fields, and the liquibase column too (maybe it will be useful).

Please, let me know if you have any additional questions.

Kind regards,
Captain1653

We have the same problem, even worse: some records show different times.

ID | AUTHOR | FILENAME | LIQUIBASE | DATEEXECUTED | ORDEREXECUTED | EXECTYPE | MD5SUM
1565954839448-1 | kasala (generated) | db/changelog/ddl_changes/v0.0.1-db-ddl-user-role-project.xml | 3.6.3 | 2020-11-12 07:15:57 | 1 | EXECUTED | 8:5ec3b3dba6240dbd9d5911582536aa55
1565954839448-1 | kasala (generated) | db/changelog/ddl_changes/v0.0.1-db-ddl-user-role-project.xml | 3.6.3 | 2020-11-12 06:43:56 | 1 | EXECUTED | 8:5ec3b3dba6240dbd9d5911582536aa55
1565954839448-1 | kasala (generated) | db/changelog/ddl_changes/v0.0.1-db-ddl-user-role-project.xml | 3.6.3 | 2020-11-12 06:43:56 | 1 | EXECUTED | 8:5ec3b3dba6240dbd9d5911582536aa55

We found this just now, even though the problem appears to be older and spans multiple Liquibase versions:

ID | AUTHOR | FILENAME | LIQUIBASE | DATEEXECUTED | ORDEREXECUTED | EXECTYPE | MD5SUM
20220303-1 | xxx | db/changelog/ddl_changes/2022-03-03-ddl-remove-lab-hierarchy.xml | 4.5.0 | 2022-04-11 14:34:07 | 1763 | EXECUTED | 8:03e8a367b079b2d16a5853c3508ba407
20220303-1 | xxx | db/changelog/ddl_changes/2022-03-03-ddl-remove-lab-hierarchy.xml | 4.5.0 | 2022-05-16 17:22:32 | 1761 | EXECUTED | 8:03e8a367b079b2d16a5853c3508ba407

This is quite a failure. We would like to know why there is no unique constraint over the columns (id, author, filename).
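If we end up cleaning this up ourselves once the root cause is clear, something along these lines is what we would try (a sketch only, assuming Postgres and the standard lowercase column names; the constraint name is made up, and we would back up databasechangelog first):

-- Remove duplicates, keeping the earliest-executed copy per (id, author, filename).
-- ctid identifies physical rows, since the table has no primary key.
DELETE FROM databasechangelog
WHERE ctid NOT IN (
    SELECT (array_agg(ctid ORDER BY dateexecuted))[1]
    FROM databasechangelog
    GROUP BY id, author, filename
);

-- Then enforce uniqueness going forward (Liquibase does not create this constraint itself).
ALTER TABLE databasechangelog
    ADD CONSTRAINT uq_databasechangelog_id_author_filename
    UNIQUE (id, author, filename);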

I have seen this occur under this scenario:

  1. A user executes a long-running Liquibase “update”.
  2. The user gets impatient, so they start another Liquibase “update” while the first is still running. This second deployment fails with “unable to acquire Liquibase lock”, as it should.
  3. The user runs “release-locks”, not realizing the first deployment is still running and still needs the lock (a quick check of the lock table, sketched after this list, helps here).
  4. The user runs Liquibase “update” again. Since they removed the lock that should have blocked this update, they end up with duplicate executions and duplicate rows in the databasechangelog table.
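Before step 3, a quick look at the lock table shows whether another process still holds the lock and since when, so “release-locks” is only run when the holder is genuinely gone. A sketch, using the standard DATABASECHANGELOGLOCK columns:

SELECT id, locked, lockgranted, lockedby
FROM databasechangeloglock;

If LOCKED is true and LOCKEDBY points at a host that is still running a deployment, the right move is to wait rather than release the lock.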

Sometimes we had an issue during deployment where the app did not start because it was not able to acquire the lock. The reason was that there was already a record in databasechangeloglock, so I had to remove it manually. Is it possible that somehow two identical [main] threads are started when deploying the application? Because I see this in the log:

2022-10-13 11:24:09,236 INFO org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext [main] Root WebApplicationContext: initialization completed in 11036 ms
2022-10-13 11:24:11,159 INFO liquibase.logging.core.JavaLogger [main] Clearing database change log checksums
2022-10-13 11:24:11,640 INFO liquibase.logging.core.JavaLogger [main] Successfully acquired change log lock
2022-10-13 11:24:11,982 INFO liquibase.logging.core.JavaLogger [main] Successfully released change log lock
2022-10-13 11:24:11,998 INFO liquibase.logging.core.JavaLogger [main] Successfully acquired change log lock
2022-10-13 11:24:34,361 INFO liquibase.logging.core.JavaLogger [main] Reading from gims.DATABASECHANGELOG
2022-10-13 11:27:47,652 INFO liquibase.logging.core.JavaLogger [main] Reading from gims.DATABASECHANGELOG

Now I am starting to think that I should not be seeing duplicate entries like this in our log.

I would say that manually removing the lock is likely the cause. You should never manually remove the Liquibase lock without being 100% certain that Liquibase is no longer running against the database.


You’ll get this message when two or more Liquibase instances are trying to manage the same schema simultaneously. This behavior is intentional. Very infrequently, it is possible that a Liquibase command started but the process got killed and never released the lock table. What do you think is happening in your case? Do you have multiple Liquibase instances running at the same time, or are the Liquibase jobs getting terminated mid-process?

I now don’t know which post to answer :wink:
The database duplication or the log duplication?

I think the database duplication might be the result of a process that got killed and didn’t unlock the lock table.
The log duplication might be a problem of multiple instances, although I don’t know why there would be multiple.

I’m running into this same issue as well, except in Jenkins.

All of the changesets in my 39 changelogs are being created twice in the DATABASECHANGELOG table after modifying 2 of my changelog files with additional updates.

  • The DATABASECHANGELOGLOCK table was not locked
  • No running process was still executing (a way to verify this is sketched below)
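That second check can be confirmed with something along these lines (a sketch, assuming Postgres, which may not match this setup; pg_stat_activity is the standard Postgres activity view):

SELECT pid, state, query_start, query
FROM pg_stat_activity
WHERE query ILIKE '%databasechangelog%';

Any row returned here, other than this query itself, would indicate a session still reading or writing the changelog tables.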

For starters, please provide the following:

  • dbms type and version
  • the full Liquibase CLI command that was executed.