blob / clob support

Hi,

You can insert BLOBs in Liquibase if you define the value in hex:

    <insert tableName="blob_table">
        <column name="a_blob" value="CAFEBABE"/> <!-- hex-encoded file content -->
    </insert>

However, it is not possible to load file content with the means available. So, an extension: I've extended InsertDataChange and overridden generateStatements so that if the user explicitly defines the column as being of type 'BLOB', the value is treated as the path of the file to load. The file is loaded and translated to hex; liquibase-core's InsertGenerator handles all the rest. This works well, so I did the same for CLOBs: load the file content and use it as the column value.


       
    <insert tableName="blob_table">
        <column name="a_blob" type="BLOB" value="path/to/file.bin"/>
    </insert>
       

Now there does seem to be a problem concerning size: using an INSERT statement, I believe you are limited to 4000 bytes over JDBC. A clean solution to this means accessing java.sql.PreparedStatement and using the setBlob(…), setClob(…), setXXX(…) methods. By doing so, we also don't have to bother translating the BLOB to hex.

I got to the point of creating a PreparedStatement; to execute it, there is a relatively small change needed in the JdbcExecutor, plus one extra interface. I've done both, and it works nicely :) Could this be added to liquibase-core? Perhaps it would even make sense to update InsertDataChange so that loading BLOBs and CLOBs is supported out of the box.

Let me know if there is any interest.

Cheers,

Bart.


If you have code to commit that would support loading BLOBs/CLOBs more directly, that would be great. I'd like to get better support into the main codebase; I just haven't had a chance yet.


Nathan

Hi,

It would be great to have it in core! The structure has changed a bit now, to keep up with the approach in core: to load BLOBs and CLOBs, use the following structure (xsd change vs 2.1!):

   
    <insert tableName="blob_table">
        <column name="a_blob" valueBlob="path/to/file.bin"/>
    </insert>
   

I haven't written a test case yet, but I tried this quickly using H2 and it works like a charm!

Have a look at https://github.com/bartolomej/liquibase and pull changes.

Cheers,

Bart.



Hi Bart,

I checked your changes out and they work great. But now I'm missing this feature in the data export too. Right now, tables with BLOB columns seem to be simply ignored in the data export, correct? Would you please add this feature as well?


cheers

Sören

Hello,

This looks just like what I need. I tried compiling the code, but I kept getting errors. Just wondering when these changes will be incorporated into a build.


Thank you!


If I do a mvn clean install -DskipTests, the project builds just fine; however, I think the extension is having trouble finding my BLOB files. Does this extension support relative paths for valueBlob?

Thanks!

–Peter

I'm trying to use the 2.1 schema in the same directory as the jar, and reference it locally in my XML file, but I get Xerces parse errors saying that valueClob isn't a valid attribute. Any word on how to get this to work in 2.0.4 with the new schema?

Hi Bart,

Do you know why your blob/clob feature got removed again in version 2.0.5? Did it cause some issues?

cheers

Hi Bart,


This issue of importing and exporting BLOB values for every database is also important to me.

For me it would be perfect to just put the data as hex code inside the XML file.

I want to initialize the database with test data from an XML changelog file, which I exported from another database, so the export/import feature would be cool.


Bye,

  Stefan


Hi,


I've also stumbled upon a situation where a file should be inserted into a BLOB column. As I see it, this is not specifically a Liquibase problem.


The main problem is in the INSERT statement itself: MySQL has support for binaries through the hexadecimal literal, and FROM_BASE64 is only supported since 5.6 (see the examples after the list below). Oracle, however, has a 4000-char limit on the JDBC statement, which is a problem for most files. Bart's solution would work when Liquibase has access to the database, though this is never the case for our production environments. Thus, I've come up with a strategy that does work through INSERT statements:

  • Insert parts of the base64-encoded file into a temporary table.
  • Insert the record containing the BLOB, calling a stored PL/SQL function where the BLOB value would be.
  • The function decodes the base64 records from the temp table and appends them to a BLOB variable, which is returned.
  • For foolproofness, the resulting BLOB is checked against an expected MD5 checksum (dbms_crypto).
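
For MySQL, both literal forms look like this (table/column names are illustrative):

    -- Hex literal: works on any MySQL version
    INSERT INTO blob_table (id, a_blob) VALUES (1, x'48656C6C6F');
    -- FROM_BASE64: only available since MySQL 5.6
    INSERT INTO blob_table (id, a_blob) VALUES (2, FROM_BASE64('SGVsbG8='));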


Now, the secondary problem is in the Liquibase changelog. To really manage the changes in the changeset, the hash of the changeset should change when the file changes. My idea is to add an (optional) attribute containing the MD5 checksum of the file. This is checked against the file itself and is also used for the final check described above.


MySQL does not need any additional functions (so far); the additional database objects for Oracle are (sketched below):

  • a global temporary table for the base64 BLOB parts
  • a package containing encode/decode functionality for exporting/inserting the base64 BLOB parts
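
A rough sketch of what I have in mind (all names are illustrative, and dbms_crypto needs an explicit grant):

    -- Temporary table holding the base64-encoded chunks of one file
    CREATE GLOBAL TEMPORARY TABLE blob_parts (
      seq_no   NUMBER,
      part_b64 VARCHAR2(4000)
    ) ON COMMIT DELETE ROWS;

    -- Decode the chunks in order, append them to a BLOB, verify the checksum
    CREATE OR REPLACE FUNCTION assemble_blob(p_expected_md5 IN VARCHAR2)
    RETURN BLOB IS
      v_blob BLOB;
      v_raw  RAW(32767);
    BEGIN
      DBMS_LOB.CREATETEMPORARY(v_blob, TRUE);
      FOR r IN (SELECT part_b64 FROM blob_parts ORDER BY seq_no) LOOP
        v_raw := UTL_ENCODE.BASE64_DECODE(UTL_RAW.CAST_TO_RAW(r.part_b64));
        DBMS_LOB.WRITEAPPEND(v_blob, UTL_RAW.LENGTH(v_raw), v_raw);
      END LOOP;
      IF RAWTOHEX(DBMS_CRYPTO.HASH(v_blob, DBMS_CRYPTO.HASH_MD5)) <> UPPER(p_expected_md5) THEN
        RAISE_APPLICATION_ERROR(-20001, 'BLOB checksum mismatch');
      END IF;
      RETURN v_blob;
    END;
    /

The generated changelog would then fill blob_parts chunk by chunk and call the function in the actual INSERT (checksum value illustrative):

    INSERT INTO my_table (id, a_blob)
    VALUES (1, assemble_blob('9E107D9D372BB6826BD81D3542A419D6'));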

Is this strategy the way to go? Any other ideas?


P.S. I'm not sure how other database vendors/products handle BLOBs in an INSERT; MSSQL was a while ago for me, and PostgreSQL has never been part of one of my projects.


My strategy for inserting large strings into Oracle is to concatenate CLOBs.


e.g. to_clob('big') || to_clob('strings') || to_clob('no') || to_clob('problem')
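
In a full statement, with each literal kept under the 4000-char limit, that looks something like (table/column names are illustrative):

    UPDATE my_table
       SET big_text = to_clob('first chunk of up to 4000 chars ...')
                   || to_clob('second chunk ...')
                   || to_clob('third chunk ...')
     WHERE id = 1;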


I'm just slapping these into sql tags at the moment and using dbms='oracle' preconditions.


If the standard insert/update tags could do this by default for large strings on Oracle, it would be awesome.


Cheers,

Pete

Did this ever get integrated? I’m currently on 3.4.1 and don’t see this type of support for these column types.