I have a PostgreSQL 14.4 cluster (AWS Aurora) to which I connect from my application using JDBC.
I use Liquibase for schema management - not only tables, but also a bunch of SQL stored procedures and functions. Basically, there is one Liquibase changeSet that runs last and executes a set of SQL files containing the stored procedures and functions, e.g.:
CREATE OR REPLACE PROCEDURE foo(arg1 text, arg2 text) LANGUAGE plpgsql AS $$
BEGIN
  -- procedure body
END;
$$;
CREATE OR REPLACE FUNCTION bar(arg1 text, arg2 text) RETURNS record LANGUAGE plpgsql AS $$
BEGIN
  -- function body
END;
$$;
These functions/procedures are replaced ONLY when there is a change in one or more of the SQL files that are part of this changeSet (runOnChange: true).
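For reference, the changeSet looks roughly like the following, expressed here in Liquibase's formatted-SQL syntax (the changeset author/id are placeholders; the real changeSet covers several such files):

```sql
--liquibase formatted sql

--changeset app-team:replace-sql-objects runOnChange:true splitStatements:false
--comment: illustrative sketch only; the real changeSet executes a set of SQL files
CREATE OR REPLACE FUNCTION bar(arg1 text, arg2 text) RETURNS record
LANGUAGE plpgsql AS $$
BEGIN
  -- function body
END;
$$;
```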
Whenever I do a rolling deployment of my application (say, with a change in the function body of bar()), Liquibase executes the CREATE OR REPLACE FUNCTION bar() statement as part of a transaction.
In the few milliseconds while bar() is being replaced, there are other ongoing transactions (from other replicas of my application) that are continuously trying to invoke bar().
Only in this tiny time window, a few transactions fail with the following error:
ERROR: function bar(arg1 => text, arg2 => text) does not exist
Hint: No function matches the given name and argument types. You might need to add explicit type casts.
Position: 4 : errorCode = 42883
I don't want any of these transactions to fail (we have no proper way of retrying them from the application layer), so this is seen as an availability problem. However, it is acceptable for these transactions to BLOCK (for a few hundred ms) while the function body is being replaced, and then proceed with the invocation.
Is there a way to safely modify a stored function / procedure in PostgreSQL while that function / procedure is being invoked continuously by multiple transactions?
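For context, one direction I have considered (untested sketch below) is serializing the migration and the callers on the same transaction-scoped advisory lock, so invocations block briefly instead of failing; the lock key 42 is an arbitrary placeholder:

```sql
-- Untested sketch: both the migration and every caller take the same
-- transaction-scoped advisory lock, so callers block while the function
-- is being replaced instead of hitting SQLSTATE 42883.

-- In the migration changeSet (runs in its own transaction; the exclusive
-- lock is released automatically at commit):
SELECT pg_advisory_xact_lock(42);
CREATE OR REPLACE FUNCTION bar(arg1 text, arg2 text) RETURNS record
LANGUAGE plpgsql AS $$
BEGIN
  -- new function body
END;
$$;

-- In the application, at the start of each transaction that invokes bar()
-- (shared mode, so callers do not block one another):
SELECT pg_advisory_xact_lock_shared(42);
```

I would prefer a solution that does not require touching every call site like this, which is why I am asking whether PostgreSQL offers something built-in.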