The setup details are specific to my project and not relevant to the question. Both files contain preconditions that check whether the previous release's changesets were applied. See the Oracle tutorial for these checks.
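For context, this is roughly what the check looks like in each file. File, id, and author names are made up; the check follows the changeSetExecuted precondition shown in that tutorial:

```xml
<!-- release_2.xml (hypothetical): refuses to run unless the last
     changeset of the previous release is already recorded -->
<databaseChangeLog
    xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
        http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-2.0.xsd">

    <!-- changelog-level precondition for this file -->
    <preConditions>
        <changeSetExecuted id="rel1-last" author="me"
                           changeLogFile="release_1.xml"/>
    </preConditions>

    <changeSet id="rel2-001" author="me">
        <tagDatabase tag="release_2"/>
    </changeSet>
</databaseChangeLog>
```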
Now if I run this file, I get a precondition failure. This happens because the preconditions in BOTH referenced files are checked before anything is executed. Why? If I run them separately, everything is OK. But if I decide to run both, it should work as well!
It is clear that the second file depends on the first one. To me, the normal order would be:
1. Check the first file's precondition.
2. If it passes, execute the first file; if not, fail.
3. Check the second file's precondition.
4. If it passes, execute the second file; if not, fail.
5. And so on.
Can this be solved somehow? I need this setup, as I have two master files: one that updates only the current release's changesets, and the other one (this example) that should be able to update both the previous and the current release in one run.
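Roughly, with made-up file names (and the namespace header from the sketch above omitted), the two masters look like this:

```xml
<!-- master_current.xml: current release only -->
<databaseChangeLog>
    <include file="release_2.xml"/>
</databaseChangeLog>

<!-- master_full.xml (this example): previous and current release in one run -->
<databaseChangeLog>
    <include file="release_1.xml"/>
    <include file="release_2.xml"/>
</databaseChangeLog>
```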
It seems that all my posts here are: "I ran into the same problem."
But yes, I can confirm that I have noticed exactly the same behaviour.
I would expect the same behaviour as you describe: check the preconditions at the moment each script is about to be run, not up front for the complete run through Liquibase. If you have Liquibase scripts that create objects which may already exist without having been created by Liquibase, then it is nice to check for that in a precondition. But if everything you want Liquibase to create is indeed created through Liquibase, then the normal log mechanism takes care of not creating the same things again.

Test: run your complete install, then run any of your included files again: nothing actually happens. Visualise this with the updateSQL command: the generated file contains only comments.
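Assuming the connection settings are in liquibase.properties (and with a made-up changelog name), this check is a one-liner; updateSQL prints the SQL it would run instead of executing it, so you can redirect it to a file and inspect it:

```
liquibase --changeLogFile=master.xml updateSQL > check.sql
```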
I understand, but imagine a situation where I have a very outdated DB that hasn't been updated for three releases. If I now run the previous or the current master without preconditions, it will succeed, and that would be wrong! The install script is fine, but I need to know when my DB is so outdated that I can no longer use my update scripts and have to use the install one instead. We have a large number of DBs and can't keep track of all of them.
So, back to my initial question: is this Liquibase behavior normal, and if so, why? Or is this a bug?
It is the expected behavior to run all the preconditions across all the changelogs before executing any of them. The rationale is that the job of the changelogs is to get your database to the current state, and the preconditions within the changelogs should be checking for things that would keep them from being able to run (wrong database type, tables already existing, etc.). We check them all first because we don't want to get into a case where you start updating your database, run into a problem halfway through, and can't finish the upgrade. That would leave you in a bad state.
That being said, it would be a good feature to add "skip this included changelog if its precondition fails" support. I think there is a feature request in Jira for that, but it hasn't been implemented yet.
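For comparison, changeset-level preconditions already support an onFail handler that is evaluated only when the changeset itself is reached; a minimal sketch, with a made-up table and ids:

```xml
<changeSet id="rel2-001" author="me">
    <!-- Checked at execution time; MARK_RAN skips just this changeset
         and records it as run instead of halting the whole update -->
    <preConditions onFail="MARK_RAN">
        <not><tableExists tableName="widget"/></not>
    </preConditions>
    <createTable tableName="widget">
        <column name="id" type="int"/>
    </createTable>
</changeSet>
```

What's missing is the equivalent at the include/changelog level, which is what the feature request is about.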
I think you misunderstood my idea. I completely agree with you that before running changesets we first have to check their preconditions, to avoid leaving the database in a bad state. But imagine I have a very outdated database. If I start Liquibase with up-to-date changelogs, it will fail, as the latest changelog file may contain new preconditions that only pass once the earlier changelogs have run. I think it is not correct to check those before running anything.
Why can't the preconditions be checked per changelog file? That would never lead to the problems you mentioned. Imagine the following situation:
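A sketch of what I have in mind, with made-up file names and the namespace header omitted: a master spanning three releases, run against a database that only has release 1 applied.

```xml
<!-- master_all.xml (hypothetical) -->
<databaseChangeLog>
    <include file="release_1.xml"/> <!-- already applied, nothing to do -->
    <include file="release_2.xml"/> <!-- its precondition (release 1 executed) passes -->
    <include file="release_3.xml"/> <!-- its precondition (release 2 executed) fails,
                                         since release 2 has not run yet -->
</databaseChangeLog>
```

Because all preconditions are checked up front, release_3's check fails before release_2 has had a chance to run, and the whole update halts, even though executing the files in order would satisfy every precondition.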
Do you think it is fair to block all of them? Would it break anything you mentioned before if we checked and ran them in sequence?
I think blocking all or blocking some should be an option for the changelog writer. I created https://liquibase.jira.com/browse/CORE-1134 to track the feature request. We need to keep them blocking the way they currently do for backwards compatibility, but we can certainly make that configurable.
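Purely as an illustration, it could be something like a new onFail value on the changelog-level preconditions. This syntax is hypothetical and does not exist in Liquibase today:

```xml
<!-- HYPOTHETICAL: "SKIP_CHANGELOG" is not a real onFail value; it would
     skip only this included changelog instead of halting the whole run -->
<preConditions onFail="SKIP_CHANGELOG">
    <changeSetExecuted id="rel2-last" author="me"
                       changeLogFile="release_2.xml"/>
</preConditions>
```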