Yep, we did it.
Our current changelog consists of a root XML file that includes several other changelogs, totaling some tens of megabytes. Running the Liquibase 1.8.1 Ant task, using Eclipse 3.3.2, Ant 1.7.0, and JDK 1.6.0_03, results in the following error:
...
[migrateDatabase] Reading from DATABASECHANGELOG
[migrateDatabase] Release Database Lock
[migrateDatabase] Successfully released change log lock
BUILD FAILED
C:\Dados\workspace-europa\DmView-Enterprise-HEAD\data\build.xml:326: java.lang.OutOfMemoryError: Java heap space
It runs fine when I increase the JVM memory with "-Xmx128m". My only question/suggestion: would it be possible to improve Liquibase's memory management, for example by loading the changelog on demand?
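For reference, I pass the flag by setting ANT_OPTS before launching Ant (when Ant runs inside Eclipse, the same flag can go into the VM arguments of the Ant launch configuration instead); on Windows that looks roughly like this:

    set ANT_OPTS=-Xmx128m
    ant -f build.xml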
Cheers
Congratulations on the large changelog! 
I am sure there are places where we could improve memory management, though I'm not sure the improvements would be substantial. We did consider support for large changelogs when designing the architecture, but we have not run it through a memory profiler recently.
Currently we use a SAX parser to read the XML, but we generate an in-memory DatabaseChangeLog object that contains the entire changelog. My concern with moving to an on-demand model is that we make several passes through the changelog (to validate that ids are not duplicated, check for required fields, execute changes, etc.), and going back and re-parsing the XML for each of those passes would add too much overhead.
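To make the trade-off concrete, here is a rough sketch of the in-memory approach (simplified, hypothetical names, not the actual Liquibase classes): the SAX handler populates the model once, and every later pass just iterates the list instead of re-running the parser. That list is exactly what holds the whole changelog in memory.

    import java.util.ArrayList;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    // Simplified stand-in for a parsed <changeSet> element.
    class ChangeSet {
        final String id;
        final String author;
        ChangeSet(String id, String author) { this.id = id; this.author = author; }
    }

    // Simplified stand-in for the in-memory changelog: parsed once,
    // then iterated by each validation/execution pass.
    class InMemoryChangeLog {
        private final List<ChangeSet> changeSets = new ArrayList<ChangeSet>();

        void add(ChangeSet cs) { changeSets.add(cs); } // called from the SAX handler

        // Pass 1: validate that no author/id pair is duplicated.
        void validateUniqueIds() {
            Set<String> seen = new HashSet<String>();
            for (ChangeSet cs : changeSets) {
                if (!seen.add(cs.author + ":" + cs.id)) {
                    throw new IllegalStateException("Duplicate change set: " + cs.id);
                }
            }
        }

        // Pass 2: check required fields.
        void validateRequiredFields() {
            for (ChangeSet cs : changeSets) {
                if (cs.id == null || cs.author == null) {
                    throw new IllegalStateException("Change set is missing id or author");
                }
            }
        }

        // Pass 3: execute. An on-demand model would have to re-parse the XML
        // here (and in the passes above) instead of walking this list.
        void execute() {
            for (ChangeSet cs : changeSets) {
                // apply the change to the database
            }
        }
    }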
I am considering re-evaluating how we build up the DatabaseChangeLog object for Liquibase 2.0. There may be opportunities there to improve our memory usage, although if anything I plan to optimize for developer productivity/maintainability over memory/speed. With the new pluggable parser system, however, we could offer two parsers: one optimized for read performance (parse once, read many times) and one optimized for memory (re-parse on each read rather than storing everything).
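A rough sketch of what those two pluggable parsers could look like (hypothetical names, not a committed API; it reuses the ChangeSet class from the sketch above): both satisfy the same contract, one keeping the parsed model in memory, the other re-parsing the file on every pass.

    import java.io.File;
    import java.util.ArrayList;
    import java.util.Iterator;
    import java.util.List;
    import javax.xml.parsers.SAXParserFactory;
    import org.xml.sax.Attributes;
    import org.xml.sax.helpers.DefaultHandler;

    interface ChangeLogSource {
        Iterator<ChangeSet> changeSets(); // each pass asks for a fresh iteration
    }

    // Optimized for read performance: parse once, read many times.
    class EagerChangeLogSource implements ChangeLogSource {
        private final List<ChangeSet> parsed;
        EagerChangeLogSource(File xml) { this.parsed = ChangeLogXml.parse(xml); }
        public Iterator<ChangeSet> changeSets() { return parsed.iterator(); }
    }

    // Optimized for memory: store nothing, re-parse the XML on each pass.
    class LazyChangeLogSource implements ChangeLogSource {
        private final File xml;
        LazyChangeLogSource(File xml) { this.xml = xml; }
        public Iterator<ChangeSet> changeSets() { return ChangeLogXml.parse(xml).iterator(); }
    }

    class ChangeLogXml {
        // Shared SAX helper: collects one ChangeSet per <changeSet> element.
        static List<ChangeSet> parse(File xml) {
            final List<ChangeSet> result = new ArrayList<ChangeSet>();
            try {
                SAXParserFactory.newInstance().newSAXParser().parse(xml, new DefaultHandler() {
                    public void startElement(String uri, String local, String qName, Attributes attrs) {
                        if ("changeSet".equals(qName)) {
                            result.add(new ChangeSet(attrs.getValue("id"), attrs.getValue("author")));
                        }
                    }
                });
            } catch (Exception e) {
                throw new RuntimeException("Could not parse " + xml, e);
            }
            return result;
        }
    }

The update code would then depend only on ChangeLogSource, so choosing speed versus memory becomes a configuration decision rather than a code change.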
Nathan