Use MySQL's native tools to restore the dumped file first (instead of loading the dump through Liquibase), and then use Liquibase only to maintain the changes on top of that large amount of data.
The other method to load data is from a CSV file. But all that will do is generate SQL like you probably already have: https://www.liquibase.org/documentation/changes/load_data.html. So loading the data on every Maven run is probably not the best approach.
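For reference, a `loadData` changeset is a thin wrapper that generates `INSERT` statements from the CSV, which is why it offers no real speed advantage over the SQL you already have. A minimal sketch (table name, column names, and file path here are hypothetical):

```xml
<changeSet id="load-seed-users" author="example">
    <!-- Generates one INSERT per CSV row; slow for large data sets. -->
    <loadData tableName="users" file="data/users.csv" separator=",">
        <column name="id" type="NUMERIC"/>
        <column name="email" type="STRING"/>
    </loadData>
</changeSet>
```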
Instead, maybe keep a database dump with the data already in it. You can restore the database from that dump prior to the Maven execution, and Liquibase will then just update the schema based on the version recorded in the DATABASECHANGELOG table.
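The restore-then-update workflow might look something like this (database name, user, and file names are hypothetical; this assumes the Liquibase Maven plugin is already configured in your pom.xml):

```shell
# One-time: capture the current database (schema + bulk data) into a dump.
# This includes the DATABASECHANGELOG table, which records the
# changesets that have already been applied.
mysqldump -u app_user -p app_db > baseline_dump.sql

# Before each Maven run: restore the baseline with MySQL's native loader,
# which is much faster than replaying INSERTs through Liquibase.
mysql -u app_user -p app_db < baseline_dump.sql

# Then let Liquibase apply only the changesets newer than the baseline.
mvn liquibase:update
```

Because the dump carries the DATABASECHANGELOG table along with the data, Liquibase sees the restored database as already at the baseline version and only runs the newer changesets.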