Why does Postgres give a concurrent row update error when I explicitly lock the whole table?

Problem description

I have a Node.js application that does the following:

data = [someObjectWithDate1, someObjectWithDate2, someObjectWithDate3, ....]
// each item contains an age group
data.forEach(async (item) => {
    // a db connection is already obtained at this point and it allows raw queries,
    // typically a knex instance wrapper function/object
    await dbConnection().raw(
    `
    CREATE OR REPLACE FUNCTION my_function() RETURNS VOID LANGUAGE plpgsql AS $$
      DECLARE
         my_integer integer = ${item.age_group};
         my_record record;
      BEGIN
            SELECT * INTO my_record FROM my_table WHERE age_group = 32;
            -- some weird calculation, but it's only a sample
            UPDATE my_table SET age_group = my_record.age_group + my_integer
                  WHERE age_group = my_integer;
      END;
    $$;
    BEGIN;
        LOCK TABLE my_table IN ACCESS EXCLUSIVE MODE;
        SELECT * FROM my_function();
    COMMIT;
    `
    )
})

Now, no matter how many of these async functions execute in parallel, Postgres should block the concurrent transactions and there should not be any concurrency problems, right? (Even though it hurts performance, which doesn't matter at this point.) However, when a large `data` array is passed in, I get the following error:
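Incidentally, `data.forEach(async (item) => …)` does not wait for one iteration before starting the next, so all of the queries are fired at the database essentially at once; a plain `for…of` loop with `await` would run them one at a time and sidestep the concurrency entirely. A minimal sketch (with a `delay` stub standing in for the real query, and `map` + `Promise.all` used in place of `forEach` so completion can be observed) illustrates the difference:

```javascript
// delay stands in for the real dbConnection().raw(...) call
const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Launches every "query" at once, the way data.forEach(async ...) does.
async function runConcurrently(data) {
  let inFlight = 0;
  let maxInFlight = 0;
  await Promise.all(
    data.map(async () => {
      inFlight += 1;
      maxInFlight = Math.max(maxInFlight, inFlight);
      await delay(10); // the awaited "query"
      inFlight -= 1;
    })
  );
  return maxInFlight; // > 1 when iterations overlapped
}

// A plain for...of loop really waits for each "query" before starting the next.
async function runSequentially(data) {
  let inFlight = 0;
  let maxInFlight = 0;
  for (const item of data) {
    inFlight += 1;
    maxInFlight = Math.max(maxInFlight, inFlight);
    await delay(10);
    inFlight -= 1;
  }
  return maxInFlight; // always 1
}
```

With the sequential version there is only ever one transaction in flight, so no two statements can collide in the first place.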

tuple concurrently updated {
  "length": 90,
  "name": "error",
  "severity": "ERROR",
  "code": "XX000",
  "file": "heapam.c",
  "line": "3888",
  "routine": "simple_heap_update",
  "stack": "error: \n ..... [the query above is printed here, followed by:] tuple concurrently updated\n
    at Parser.parseErrorMessage (/app/node_modules/pg-protocol/src/parser.ts:357:11)\n
    at Parser.handlePacket (/app/node_modules/pg-protocol/src/parser.ts:186:21)\n
    at Parser.parse (/app/node_modules/pg-protocol/src/parser.ts:101:30)\n
    at Socket.<anonymous> (/app/node_modules/pg-protocol/src/index.ts:7:48)\n
    at Socket.emit (events.js:375:28)\n
    at Socket.emit (domain.js:470:12)\n
    at addChunk (internal/streams/readable.js:290:12)\n
    at readableAddChunk (internal/streams/readable.js:265:9)\n
    at Socket.Readable.push (internal/streams/readable.js:204:10)\n
    at TCP.onStreamRead (internal/stream_base_commons.js:188:23)

It doesn't work at all. What is the reason behind this?

Update: as suggested in the comments below, I also wrapped the function call in an advisory lock; however, with a large enough array it still gives me the error above:

BEGIN;    
    SELECT pg_advisory_xact_lock(2142616474639426746);
    SELECT * FROM my_function();
COMMIT;

CREATE OR REPLACE FUNCTION my_function() ....

Tags: node.js, postgresql, multithreading, asynchronous, io

Solution


Thanks to @sudo's comment above: if I move the statements like this:

BEGIN;    
    SELECT pg_advisory_xact_lock(2142616474639426746);
    CREATE OR REPLACE FUNCTION my_function() ....
    --the whole function definition is wrapped by an advisory lock
    SELECT * FROM my_function();
COMMIT;

then the problem seems to be avoided; I tested with large data (actually faked by looping over the same data repeatedly). Note that this may not be the most efficient way to do this kind of thing, but it does work. The problem with the attempt given in the question is that `CREATE OR REPLACE FUNCTION` itself updates the function's row in the `pg_proc` system catalog, and that statement ran *outside* the transaction that took the lock; the lock only protects the transaction it is taken in. So the concurrent re-creations of the same function still collided with each other, which is exactly what `simple_heap_update` reports as `tuple concurrently updated`. With the function definition moved inside the locked transaction, everything seems fine now.
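A possibly cheaper variant (a sketch only, not tested against the original setup) is to create the function once, before the loop, taking the age group as a parameter (`p_age_group` here, a hypothetical name) instead of interpolating `${item.age_group}` into the function body. Then each transaction only takes the advisory lock and calls the function, and nothing touches the `pg_proc` catalog row while the loop runs:

```sql
-- Defined once, before the loop; p_age_group replaces the interpolated value.
CREATE OR REPLACE FUNCTION my_function(p_age_group integer) RETURNS VOID
LANGUAGE plpgsql AS $$
  DECLARE
     my_record record;
  BEGIN
        SELECT * INTO my_record FROM my_table WHERE age_group = 32;
        UPDATE my_table SET age_group = my_record.age_group + p_age_group
              WHERE age_group = p_age_group;
  END;
$$;

-- Per item, with the value passed as a knex binding, e.g.
-- dbConnection().raw('... SELECT * FROM my_function(?) ...', [item.age_group]):
BEGIN;
    SELECT pg_advisory_xact_lock(2142616474639426746);
    SELECT * FROM my_function(42);  -- 42 stands in for item.age_group
COMMIT;
```

As a side benefit, passing the value as a bound parameter instead of template interpolation also avoids building SQL from user data.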

