#StackBounty: #locking #insert #transaction #concurrency How to ensure no duplicate records are added when using multiple Node JS proce…

Bounty: 50

We are working on an appointment scheduler. We have a table Appointment that stores an appointment with starttime, endtime, assigneeId and eventId.

We are using Bookshelf.js as an ORM with Node JS. The problem we are facing is that if we send parallel requests using JMeter for the same combination of starttime, endtime, assigneeId and eventId, our app ends up booking an appointment for both threads, which is incorrect.

Only one of them should succeed and the other one should fail. How do we go about solving this problem?

CREATE TABLE `Appointment` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `vendor_Id` int(10) unsigned NOT NULL,
  `account_Id` int(10) unsigned NOT NULL,
  `event_Id` int(10) unsigned NOT NULL,
  `assignee_Id` int(10) unsigned NOT NULL,
  `starttime` datetime NOT NULL,
  `endtime` datetime NOT NULL,
  `book` tinyint(4) DEFAULT '0',
  `code` varchar(20) DEFAULT NULL,
  `deleted_at` datetime DEFAULT NULL,
  `deleted_By_Id` int(10) unsigned DEFAULT NULL,
  `updated_Appointment_At` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `created_Appointment_At` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`),
  UNIQUE KEY `appointments_code_unique` (`code`)
) ENGINE=InnoDB AUTO_INCREMENT=106 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci

What we do is check, before inserting, whether any overlapping appointments exist in the DB. If not, we add the new one. But with multiple Node JS processes, neither process finds an existing appointment, and both end up inserting.
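In SQL terms, each request runs roughly the following check-then-insert sequence (a simplified sketch of the pattern described above; the actual Bookshelf.js calls generate equivalent queries, and the ? placeholders stand for the request's values):

-- Step 1: look for an overlapping appointment for this assignee and event
SELECT id
FROM Appointment
WHERE assignee_Id = ?
  AND event_Id = ?
  AND deleted_at IS NULL
  AND starttime < ?   -- requested endtime
  AND endtime   > ?;  -- requested starttime

-- Step 2: only if step 1 returned no rows, insert the appointment
INSERT INTO Appointment (vendor_Id, account_Id, event_Id, assignee_Id, starttime, endtime)
VALUES (?, ?, ?, ?, ?, ?);

With two processes running this concurrently, both can finish step 1 before either commits step 2, so both see "no overlap" and both insert.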

First solution: INSERT IGNORE, but it is difficult in our case to define a unique key based on starttime and endtime, as we need to check for overlapping values as well.
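To illustrate the limitation: a unique key only rejects exact duplicates of the key columns, not overlapping ranges. A sketch (the key name and the literal values are made up for illustration):

-- Hypothetical unique key; the name and column choice are illustrative
ALTER TABLE Appointment
  ADD UNIQUE KEY uq_assignee_slot (assignee_Id, event_Id, starttime, endtime);

-- INSERT IGNORE silently skips a row with exactly the same key values...
INSERT IGNORE INTO Appointment (vendor_Id, account_Id, event_Id, assignee_Id, starttime, endtime)
VALUES (1, 1, 10, 5, '2021-06-01 10:00:00', '2021-06-01 11:00:00');

-- ...but an overlapping slot has different key values, so it is still inserted
INSERT IGNORE INTO Appointment (vendor_Id, account_Id, event_Id, assignee_Id, starttime, endtime)
VALUES (1, 1, 10, 5, '2021-06-01 10:30:00', '2021-06-01 11:30:00');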

Second solution: let both of them insert the record into the Appointment table, and after the transaction is committed check for overlap; if we find an overlap, we delete the record that was added later. This wastes computation as well as ids.
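A sketch of what that after-the-fact cleanup might look like (assuming "added later" means the higher id, and that the overlap is scoped per assignee and event):

-- Remove the later row of any overlapping pair, keeping the lower id
DELETE a
FROM Appointment a
JOIN Appointment b
  ON  a.assignee_Id = b.assignee_Id
  AND a.event_Id    = b.event_Id
  AND a.id          > b.id
  AND a.starttime   < b.endtime
  AND a.endtime     > b.starttime
WHERE a.deleted_at IS NULL
  AND b.deleted_at IS NULL;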

Third solution: use an app-level queue to manage all requests to the Appointment table, so that only one process resolves all queue messages. I wonder how this approach will scale.

How do I ensure, at the time of insert, that only one thread can book an appointment safely? Would a stored procedure be helpful? We are using the transactions provided by Bookshelf.js.
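For illustration only, one way to interpret "only one thread can book at a time" is to serialize the check-and-insert per assignee with MySQL's named locks (GET_LOCK / RELEASE_LOCK). This is a sketch of that option, not necessarily the right answer here; the lock name and timeout are made up:

-- Serialize bookings per assignee; GET_LOCK returns 1 on success, 0 on timeout
SELECT GET_LOCK(CONCAT('book_assignee_', ?), 5);   -- ? = assignee_Id, 5 s timeout

-- Re-run the overlap check; no other session holding the same lock can be here
SELECT id FROM Appointment
WHERE assignee_Id = ? AND event_Id = ?
  AND deleted_at IS NULL
  AND starttime < ? AND endtime > ?;

-- Insert only if the check above returned no rows
INSERT INTO Appointment (vendor_Id, account_Id, event_Id, assignee_Id, starttime, endtime)
VALUES (?, ?, ?, ?, ?, ?);

SELECT RELEASE_LOCK(CONCAT('book_assignee_', ?));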



#StackBounty: #insert #postgresql-performance Postgresql sudden slow down on INSERTs

Bounty: 50

I have a very odd situation with one of my Postgres databases. The server itself is quite powerful: a 24-SSD NAS divided into 2 LUNs of 12 drives each, a very good IOPS setup. It is connected via a 10 Gb network card to a 128-thread server with 256 GB of RAM, running on Windows Server 2019.

During the day, at random times, I am hit by a sudden slowdown of transaction INSERTs; the query looks like this:

INSERT INTO public.transactions(  col1,  col2,  col3, col4,  col5,  col6,  col7,  col8,  col9,  col10,  col11, col12)  
VALUES (  $1, $2, $3, $4,  $5, $6, $7, $8, $9, $10, $11, $12) 
RETURNING id1

Under normal circumstances, the time is close to 0. During a problem period, which lasts 2-3 minutes, it can be between 5 and 20 seconds, which wrecks the transaction system. Then it suddenly recovers and runs very stably. CPU, RAM, and disk queues are not a problem. I don't spot any significant slowdown of other queries in the logs, and no vacuuming is taking place. I have even installed some pg extensions to get execution plans of queries longer than x ms, but in the case of an INSERT all you can get is:

Insert on transactions  (cost=0.00..0.01 rows=1 width=313)
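For reference, one common way to capture plans of statements above a duration threshold is the auto_explain module; a minimal configuration sketch, not necessarily the extension used here, with an illustrative threshold:

-- Enable auto_explain for the current session (it can also be preloaded
-- via shared_preload_libraries / session_preload_libraries in postgresql.conf)
LOAD 'auto_explain';
SET auto_explain.log_min_duration = '500ms';  -- log plans of statements slower than this
SET auto_explain.log_analyze = on;            -- include actual run times
SET auto_explain.log_buffers = on;            -- include buffer usage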

Lock monitoring only shows that these same queries are blocking each other, which is quite odd since they are independent.
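For context, the lock check was along the lines of the standard blocking-session query against pg_stat_activity with pg_blocking_pids() (a sketch, not the exact query used; requires PostgreSQL 9.6+):

-- Which sessions are waiting, and which PIDs block them
SELECT pid,
       pg_blocking_pids(pid) AS blocked_by,
       wait_event_type,
       wait_event,
       state,
       left(query, 60) AS query
FROM pg_stat_activity
WHERE cardinality(pg_blocking_pids(pid)) > 0;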

I don’t expect a solution to my problem, as I know that’s impossible, but perhaps some more hints on how I can investigate such an issue? I feel like I have tried every approach, yet I still feel blind and lost when it happens.

