#StackBounty: #postgresql #query-performance #postgresql-performance #encryption #cpu Postgres SQL CPU and RAM optimisation to improve …

Bounty: 50

I am currently working on an application that requires column-based decryption of a few thousand rows on a regular basis. The columns are decrypted with pgp_sym_decrypt, and several columns are decrypted in each SELECT.
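
A sketch of the kind of query described above, with hypothetical table and column names (pgp_sym_decrypt comes from the pgcrypto extension; the real key is supplied by the application):

-- Hypothetical names, for illustration only.
SELECT id,
       pgp_sym_decrypt(first_name_enc, 'secret-key') AS first_name,
       pgp_sym_decrypt(last_name_enc,  'secret-key') AS last_name,
       pgp_sym_decrypt(notes_enc,      'secret-key') AS notes
FROM customer_data
LIMIT 5000;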

For a few thousand records the queries are unfortunately quite slow, and I found that the CPU and RAM are barely used: top shows 6.6% CPU usage and 14 GB of RAM free out of 16 GB. A "standard query" takes around 30 seconds to complete, while acceptable performance would be around 5 seconds.

I tried changing a few parameters in postgresql.conf, but didn't get any performance improvement. The Postgres version is 10.6.

Is there a way to make Postgres use more of the available hardware so that decryption runs faster?


Get this bounty!!!


#StackBounty: #mysql #query-performance #join #mariadb Slow performance when joining a small table and filtering out on a non-key colum…

Bounty: 100

I am fairly new to MariaDB and I am struggling with one issue that I cannot get to the bottom of. This is the query:

SELECT SQL_NO_CACHE STRAIGHT_JOIN
    `c`.`Name` AS `CategoryName`,
    `c`.`UrlSlug` AS `CategorySlug`,
    `n`.`Description`,
    IF(`n`.`OriginalImageUrl` IS NOT NULL, `n`.`OriginalImageUrl`, `s`.`LogoUrl`) AS `ImageUrl`,
    `n`.`Link`,
    `n`.`PublishedOn`,
    `s`.`Name` AS `SourceName`,
    `s`.`Url` AS `SourceWebsite`,
    `s`.`UrlSlug` AS `SourceUrlSlug`,
    `n`.`Title`
FROM `NewsItems` AS `n`
INNER JOIN `NewsSources` AS `s` ON `n`.`NewsSourceId` = `s`.`Id`
LEFT JOIN `Categories` AS `c` ON `n`.`CategoryId` = `c`.`CategoryId`
WHERE `s`.`UrlSlug` = 'slug'
#WHERE `s`.`Id` = 52
ORDER BY `n`.`PublishedOn` DESC
LIMIT 50

NewsSources is a table with about 40 rows and NewsItems has ~1 million rows. Each news item belongs to one source, and one source can have many items. I'm trying to get all items for a source identified by its URL slug.

  1. When I use STRAIGHT_JOIN and query for a source that has lots of news items, the query returns immediately.
    However, if I query for a source with a low number of items (~100), or for a URL slug that doesn't belong to any source (the result set is 0 rows), the query runs for 12 seconds.

  2. When I remove STRAIGHT_JOIN, I see the opposite of the first case: it runs really slowly when I query for a news source with many items, and returns immediately for sources with a low number of items or when the result set is empty because the URL slug doesn't belong to any news source.

  3. When I query by news source ID (the commented-out WHERE s.Id = 52), the result comes back immediately, regardless of whether that source has lots of items or none (see the sketch after this list).
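
For reference, here is a sketch of how the slug could be resolved to an Id first, so that the join filters on s.Id as in case 3. This is only an illustration of that observation, not a tested rewrite (the SELECT list is shortened):

-- Illustration only; same joins as above, shortened column list.
SELECT `n`.`Title`, `n`.`PublishedOn`, `s`.`Name` AS `SourceName`
FROM `NewsItems` AS `n`
INNER JOIN `NewsSources` AS `s` ON `n`.`NewsSourceId` = `s`.`Id`
WHERE `s`.`Id` = (SELECT `Id` FROM `NewsSources` WHERE `UrlSlug` = 'slug')
ORDER BY `n`.`PublishedOn` DESC
LIMIT 50;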

I want to point out again that the NewsSources table contains only about 40 rows.

Here are the analyzer results for the query above: Explain Analyzer

What can I do to make this query run fast consistently?

Here are tables and indexes definitions:

-- --------------------------------------------------------
-- Server version:               10.4.13-MariaDB-1:10.4.13+maria~bionic - mariadb.org binary distribution
-- Server OS:                    debian-linux-gnu
-- --------------------------------------------------------

-- Dumping structure for table Categories
CREATE TABLE IF NOT EXISTS `Categories` (
  `CategoryId` int(11) NOT NULL AUTO_INCREMENT,
  `Name` varchar(50) COLLATE utf8mb4_unicode_ci NOT NULL,
  `Description` longtext COLLATE utf8mb4_unicode_ci NOT NULL,
  `UrlSlug` varchar(30) COLLATE utf8mb4_unicode_ci NOT NULL,
  `CreatedOn` datetime(6) NOT NULL,
  `ModifiedOn` datetime(6) NOT NULL,
  PRIMARY KEY (`CategoryId`)
) ENGINE=InnoDB AUTO_INCREMENT=16 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;


-- Dumping structure for table NewsItems
CREATE TABLE IF NOT EXISTS `NewsItems` (
  `Id` bigint(20) NOT NULL AUTO_INCREMENT,
  `NewsSourceId` int(11) NOT NULL,
  `Title` varchar(500) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
  `Link` varchar(500) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
  `Description` longtext COLLATE utf8mb4_unicode_ci DEFAULT NULL,
  `PublishedOn` datetime(6) NOT NULL,
  `GlobalId` varchar(500) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
  `CategoryId` int(11) DEFAULT NULL,
  PRIMARY KEY (`Id`),
  KEY `IX_NewsItems_CategoryId` (`CategoryId`),
  KEY `IX_NewsItems_NewsSourceId_GlobalId` (`NewsSourceId`,`GlobalId`),
  KEY `IX_NewsItems_PublishedOn` (`PublishedOn`),
  KEY `IX_NewsItems_NewsSourceId` (`NewsSourceId`),
  FULLTEXT KEY `Title` (`Title`,`Description`),
  CONSTRAINT `FK_NewsItems_Categories_CategoryId` FOREIGN KEY (`CategoryId`) REFERENCES `Categories` (`CategoryId`),
  CONSTRAINT `FK_NewsItems_NewsSources_NewsSourceId` FOREIGN KEY (`NewsSourceId`) REFERENCES `NewsSources` (`Id`)
) ENGINE=InnoDB AUTO_INCREMENT=649802 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;


-- Dumping structure for table NewsSources
CREATE TABLE IF NOT EXISTS `NewsSources` (
  `Id` int(11) NOT NULL AUTO_INCREMENT,
  `Name` varchar(500) COLLATE utf8mb4_unicode_ci NOT NULL,
  `Url` varchar(500) COLLATE utf8mb4_unicode_ci NOT NULL,
  `UrlSlug` varchar(50) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
  `LogoUrl` varchar(500) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
  PRIMARY KEY (`Id`)
) ENGINE=InnoDB AUTO_INCREMENT=55 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;


Get this bounty!!!

#StackBounty: #sql-server #query-performance #sql-server-2014 #optimization #cardinality-estimates Changing database compatibility from…

Bounty: 50

I'm seeking expert/practical advice from a DBA point of view. One of our application databases, running on SQL Server 2014 after migration, was left at the old database compatibility level, i.e. 100 (SQL Server 2008).

From the development side, all testing has been done; they don't see much difference and want to move to production based on their testing.

In our own testing, for certain processes where we saw slowness (for example in stored procedures), we found the part of the statement that was slow and added a QUERYTRACEON hint, something like the query below, while keeping the compatibility level at 120. This helps keep performance stable:

SELECT  [AddressID],
    [AddressLine1],
    [AddressLine2]
FROM Person.[Address]
WHERE [StateProvinceID] = 9 AND
    [City] = 'Burbank'
OPTION (QUERYTRACEON 9481);
GO

UPDATE: editing the question based on more findings.

We actually found things getting worse for a table that calls a scalar function within a computed column.

Below is how that column looks:

CATCH_WAY AS ([dbo].[fn_functionf1]([Col1])) PERSISTED NOT NULL
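
The body of fn_functionf1 is not shown; purely for illustration, a scalar UDF used this way in a persisted computed column has roughly the following shape (a hypothetical stand-in, not the real function):

-- Hypothetical stand-in for [dbo].[fn_functionf1]; placeholder logic only.
CREATE FUNCTION [dbo].[fn_functionf1] (@Col1 varchar(100))
RETURNS varchar(100)
WITH SCHEMABINDING  -- usually needed so the function counts as deterministic and the column can be PERSISTED
AS
BEGIN
    RETURN UPPER(LTRIM(RTRIM(@Col1)));
END;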

The part of the query where it goes weird looks roughly like this:

DELETE t2
   OUTPUT deleted.col1,
          deleted.col2,
          deleted.col3
   INTO #temp1
FROM #temp2 t2
INNER JOIN dbo.table1 tb1 ON tb1.CATCH_WAY = [dbo].[fn_functionf1](t2.[Col1])
   AND t2.[col2] = tb1.[col2]
   AND t2.[col3] = tb1.[col3]
   AND ISNULL(t2.[col4], '') = ISNULL(tb1.[col4], '')

I know the function is being called and is slow, but the problem is that under the current compatibility level (100) it runs tolerably slowly, whereas after changing to 120 it gets around 100x slower.
What is happening?


Get this bounty!!!

#StackBounty: #query-performance #optimization #execution-plan #sql-server-2019 Why are [Seemingly] suitable indexes not used on a LEFT…

Bounty: 150

I have the following [fairly meaningless, just for the purpose of demonstration] query in the StackOverflow database:

SELECT  *
FROM    Users u
        LEFT JOIN Comments c
            ON u.Id = c.UserId OR
               u.Id = c.PostId
WHERE   u.DisplayName = 'alex'

The only index on the Users table is a clustered index on ID.

The Comments table has the following non-clustered indexes, as well as a clustered index on Id:

CREATE INDEX IX_UserID ON Comments
(
    UserID,
    PostID
)

CREATE INDEX IX_PostID ON Comments
(
    PostID,
    UserID
)

The estimated plan for the query is here:

I can see the first thing the optimizer will do is perform a CI scan on the Users table to filter only those users where DisplayName = 'alex', effectively doing this:

SELECT  *
FROM    Users u
WHERE   u.DisplayName = 'alex'
ORDER BY Id

and retrieving results like this:

[screenshot: the matching Users rows, ordered by Id]

Then it will scan the Comments CI and, for every row, check whether the row satisfies the predicate

u.Id = c.UserId OR u.Id = c.PostId

Despite the two indexes, this CI scan is performed.

Wouldn't it be more efficient if the optimizer did a separate seek on each of the indexes on the Comments table above and joined the results together?

If I visualise what that would look like: in the screenshot above, we can see the first result of the Users CI scan is Id 420.

I can visualize what the IX_UserID Index looks like using

SELECT      UserID,
            PostID
FROM        Comments
ORDER BY    UserID,
            PostID

so if I seek to the rows for user ID 420 as an index seek would:

[screenshot: IX_UserID rows for UserID 420]

For every row where UserID = 420, I can check whether u.Id = c.UserId OR u.Id = c.PostId; of course, they all match the u.Id = c.UserId part of our predicate.

So for the second part of our index seek, we can seek through our index IX_PostID which can be visualised as follows:

SELECT      PostID,
            UserID
FROM        Comments
ORDER BY    PostID,
            UserID 

If I seek to Post ID 420 I can see nothing is there:

[screenshot: no IX_PostID rows for PostID 420]

So we then go back to the results of the CI scan, move to the next row (userId 447) and repeat the process.
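
Spelled out as T-SQL, the per-row process described above is roughly what two index seeks combined with a UNION would do for a single value, e.g. for 420 (shown purely as an illustration):

-- Illustration only: one seek per index, results concatenated and de-duplicated.
SELECT UserId, PostId FROM Comments WHERE UserId = 420
UNION
SELECT UserId, PostId FROM Comments WHERE PostId = 420;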

The behaviour I have described above is possible when the OR appears in a WHERE clause:

SELECT      UserID,
            PostID
FROM        Comments
WHERE       UserID = 420 OR PostID = 420
ORDER BY    UserID,
            PostID

Plan here

My question therefore is, why isn’t an OR condition in a JOIN clause able to perform an index seek on appropriate indexes?


Get this bounty!!!

#StackBounty: #sql-server #sql-server-2008-r2 #query-performance #xquery Optimize/Speedup query

Bounty: 50

The queries below are used for inserting into and updating tables in the SQL Server database. The XQuery is slow when executed in SSMS for the first time.

Query

insert new <ROW>

UPDATE BalanceTable
SET [daily_balance].modify('insert <Row><date>2007-05-10</date><Balance>-8528</Balance><Transactiondr>835</Transactiondr><Transactioncr>9363</Transactioncr><Rowid>2</Rowid></Row> as first into (/Root)[1]')
WHERE [daily_balance].exist('/Root/Row[date=''2007-05-10'']') = 0
  AND [daily_balance].exist('/Root') = 1
  AND [AccountID] = 61
  AND [Date] = '31-May-2007';

modify balance

UPDATE BalanceTable
SET [daily_balance].modify('replace value of (/Root/Row[date=''2007-05-10'']/Balance/text())[1] with (/Root/Row[date=''2007-05-10'']/Balance)[1] - 3510')
WHERE [AccountID] = 577
  AND [Date] = '31-May-2007'
  AND [daily_balance].exist('/Root/Row[date=''2007-05-10'']') = 1;

modify transactioncr

UPDATE BalanceTable
SET [daily_balance].modify('replace value of (/Root/Row[date=''2007-05-10'']/Transactioncr/text())[1] with (/Root/Row[date=''2007-05-10'']/Transactioncr)[1] + 3510')
WHERE [AccountID] = 577
  AND [Date] = '31-May-2007'
  AND [daily_balance].exist('/Root/Row[date=''2007-05-10'']') = 1;

Table schema

USE [Fitness Te WM16]                       
GO                                                              
SET ANSI_NULLS ON                       
GO                                              
SET QUOTED_IDENTIFIER ON                        
GO                                              
SET ANSI_PADDING ON                     
GO                                              
CREATE TABLE [dbo].[BalanceTable](                      
    [AccountID] [int] NULL,                 
    [Type] [char](10) NULL,                 
    [Date] [date] NULL,                 
    [Balance] [decimal](15, 2) NULL,                    
    [TRansactionDr] [decimal](15, 2) NULL,                  
    [TRansactionCr] [decimal](15, 2) NULL,                  
    [daily_Balance] [xml] NULL,                 
    [AutoIndex] [int] IDENTITY(1,1) NOT NULL,                   
 CONSTRAINT [PK_BalanceTable] PRIMARY KEY CLUSTERED                         
(                       
    [AutoIndex] ASC                 
)WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]                       
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]                                               
GO                                              
SET ANSI_PADDING OFF                        
GO  

Execution plan

The execution plan is attached here: sql execution plan

Sample data

The sample XML data for reference is given below.

<Root>              
      <Row>             
        <date>2007-05-31</date>             
        <Balance>-47718</Balance>               
        <Transactiondr>0</Transactiondr>                
        <Transactioncr>47718</Transactioncr>                
        <Rowid>7</Rowid>                
      </Row>                
      <Row>             
        <date>2007-05-29</date>             
        <Balance>-31272</Balance>               
        <Transactiondr>0</Transactiondr>                
        <Transactioncr>31272</Transactioncr>                
        <Rowid>6</Rowid>                
      </Row>                
      <Row>             
        <date>2007-05-18</date>             
        <Balance>-48234</Balance>               
        <Transactiondr>0</Transactiondr>                
        <Transactioncr>48234</Transactioncr>                
        <Rowid>5</Rowid>                
      </Row>                
      <Row>             
        <date>2007-05-11</date>             
        <Balance>-42120</Balance>               
        <Transactiondr>0</Transactiondr>                
        <Transactioncr>42120</Transactioncr>                
        <Rowid>4</Rowid>                
      </Row>                
      <Row>             
        <date>2007-05-10</date>             
        <Balance>-21060</Balance>               
        <Transactiondr>0</Transactiondr>                
        <Transactioncr>21060</Transactioncr>                
        <Rowid>3</Rowid>                
      </Row>                
      <Row>             
        <date>2007-05-08</date>             
        <Balance>-10530</Balance>               
        <Transactiondr>0</Transactiondr>                
        <Transactioncr>10530</Transactioncr>                
        <Rowid>2</Rowid>                
      </Row>                
      <Row>             
        <date>2007-05-04</date>             
        <Balance>-21060</Balance>               
        <Transactiondr>0</Transactiondr>                
        <Transactioncr>21060</Transactioncr>                
        <Rowid>1</Rowid>                
      </Row>                
      <Maxrowid>7</Maxrowid>                
    </Root> 

Question

I am using SQL Server 2008 R2. The total time taken for 500 such queries is 20 to 40 seconds. How can I optimise these queries to speed up execution?


Get this bounty!!!

#StackBounty: #mysql #query-performance "Change user" command slow query

Bounty: 50

We have a backend server using ASP.NET Core 3.0, and we use the Pomelo library to connect to our MySQL database.

Every now and then the server starts throwing a LOT of SQL exceptions saying that all connections in the pool are in use.

We have a script to analyze slow queries, and its output is filled with entries like this:

# administrator command: Change user;
# User@Host: username[username] @ [x.x.x.x]
# Query_time: 8.998701  Lock_time: 0.000000  Rows_sent: 1  Rows_examined: 436537
SET timestamp=1576705430;

Just query after query like this, depleting all the connections in the pool.

I don't know where this query is coming from, or why it shows up for a while and then stops showing.

If I run the SHOW PROCESSLIST; command, I see many connections with no database selected whose command is "Change user"!

Is this even a valid command?
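
As a side note, how often the server receives this command can be watched via its status counters, for example (illustration only):

SHOW GLOBAL STATUS LIKE 'Com_change_user';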

Almost everything is left at the defaults for the pool settings:

  • 100 connections max.
  • Changed the command timeout to 120 seconds.
  • Enabled retry on failure (max count 5); also tried with this setting off.


Get this bounty!!!

#StackBounty: #sql-server #sql-server-2014 #query-performance #optimization #full-text-search Can oscillating number of fragment decrea…

Bounty: 100

I've got an FTS catalog on a table containing ~10^6 rows. It used to work like a charm, but lately, for an unknown reason, it has started to show very bad performance (queries > 30 sec) at random.

While reading Guidelines for full-text index maintenance, I dug around in sys.fulltext_index_fragments (with a query along the lines of the sketch after this list) and noticed the following:

  1. The number of fragments oscillates between 2 and 20 at high frequency
    (multiple times per minute);
  2. The biggest fragment contains ~10^5 rows and is 11 MB;
  3. The others contain from 1 to 40 rows.
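
For reference, the numbers above can be pulled from the DMV with a query along these lines (a sketch; adapt the grouping as needed):

-- Sketch: summarise full-text index fragments per table.
SELECT OBJECT_NAME(f.table_id) AS table_name,
       COUNT(*)                AS fragment_count,
       SUM(f.row_count)        AS total_rows,
       MAX(f.data_size)        AS largest_fragment_bytes
FROM sys.fulltext_index_fragments AS f
GROUP BY f.table_id;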

Can this oscillating number of fragments mess up SQL Server's execution plan selection?

What can I do to clean this up?


Get this bounty!!!

#StackBounty: #mysql #performance #query-performance #performance-tuning same query gets slower after other queries get called (using a…

Bounty: 50

I have a query that takes 2 seconds when I run it right after restarting MySQL, but when a sequence of other queries runs before it (the query belongs to a procedure that is called among other procedures), it suddenly takes around 2 minutes. If I restart MySQL and rerun it, it takes 2 seconds again.

The durations are consistently the same (~2 seconds and ~2 minutes), so even the slowdown is not random, and it is specifically this query (or maybe this table) that gets slower; everything else is normal.

Things I have already tried:

  • Disabled the query cache.
  • Increased innodb_buffer_pool_size (I assumed it had to do with memory or a buffer filling up, making the next query slower).
  • Checked for table locks.

I don't know in what other way (apart from the data) one query can affect another query or table that runs after it.

Is there any avenue to investigate or something to try? I don't have any lead on what to look for.


I’m using MySQL version 5.7

the explain plan is

[screenshot: EXPLAIN output]

I actually found out that the issue is not with a specific table, but simply with too many prepared statements:

+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| Com_prepare_sql    | 60    |
| Com_stmt_prepare   | 82    |
| Com_xa_prepare     | 0     |
| Com_stmt_reprepare | 24    |
+--------------------+-------+

It is mentioned in the docs that:
A prepared statement is also global to the session. If you create a prepared statement within a stored routine, it is not deallocated when the stored routine ends.

I thought this meant that they should be deallocated explicitly, but apparently this is not the case.
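
For reference, a statement created with the SQL-level PREPARE syntax can be released explicitly with DEALLOCATE PREPARE; a minimal sketch with a made-up statement name:

PREPARE stmt_example FROM 'SELECT 1';
EXECUTE stmt_example;
DEALLOCATE PREPARE stmt_example;  -- frees the statement for this session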


Get this bounty!!!

#StackBounty: #mysql #performance #query-performance #performance-tuning same query gets slower after other queries get called

Bounty: 50

I have a query that takes 2 seconds when I run it right after restarting MySQL, but when a sequence of other queries runs before it (the query belongs to a procedure that is called among other procedures), it suddenly takes around 2 minutes. If I restart MySQL and rerun it, it takes 2 seconds again.

The durations are consistently the same (~2 seconds and ~2 minutes), so even the slowdown is not random, and it is specifically this query (or maybe this table) that gets slower; everything else is normal.

Things I have already tried:

  • Disabled the query cache.
  • Increased innodb_buffer_pool_size (I assumed it had to do with memory or a buffer filling up, making the next query slower).
  • Checked for table locks.

I don't know in what other way (apart from the data) one query can affect another query or table that runs after it.

Is there any avenue to investigate or something to try? I don't have any lead on what to look for.


I’m using MySQL version 5.7

the explain plan is

[screenshot: EXPLAIN output]

and the table creation is

 create table s__table_0
(
    id int auto_increment
        primary key,
    year int not null,
    nb_patients double not null,
    disease_id varchar(255) not null,
    facility_id int not null,
    constraint s__table_0_ibfk_1
        foreign key (facility_id) references facility (id)
            on update cascade,
    constraint s__table_0_ibfk_2
        foreign key (disease_id) references disease (code)
            on update cascade
);

create index fdcbf_disease_id_facility_id_index
    on s__table_0 (disease_id, facility_id);

create index fk_facility
    on s__table_0 (facility_id);

create index forecast_de_disease_c35850_idx
    on s__table_0 (disease_id);

The queries before it are independent of it, but there are a lot of inserts happening.
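
For context, profiles like the ones below can be captured in a session roughly like this (SHOW PROFILE is deprecated in 5.7 but still available):

SET profiling = 1;
-- run the slow query here
SHOW PROFILES;               -- lists recent statements with their query ids
SHOW PROFILE FOR QUERY 1;    -- stage timings for query id 1, as shown below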

This is the SHOW PROFILE result:

+---------------------------+------------+
| Status                    | Duration   |
+---------------------------+------------+
| continuing inside routine |   0.000040 |
| checking permissions      |   0.000011 |
| checking permissions      |   0.000006 |
| checking permissions      |   0.000005 |
| checking permissions      |   0.000006 |
| checking permissions      |   0.000006 |
| checking permissions      |   0.000007 |
| checking permissions      |   0.000009 |
| Opening tables            |   0.000074 |
| init                      |   0.000121 |
| System lock               |   0.000023 |
| optimizing                |   0.000057 |
| statistics                |   0.000302 |
| preparing                 |   0.000094 |
| Creating tmp table        |   0.000067 |
| Sorting result            |   0.000018 |
| executing                 |   0.000008 |
| Sending data              | 107.846938 |
| Creating sort index       |   0.806147 |
| end                       |   0.000012 |
| query end                 |   0.014659 |
| removing tmp table        |   0.000031 |
| query end                 |   0.000004 |
| closing tables            |   0.000038 |
| query end                 |   0.000004 |
| closing tables            |   0.000009 |
+---------------------------+------------+

And the same query after a while, in another session, or after a restart:

+----------------------+----------+
| Status               | Duration |
+----------------------+----------+
| starting             | 0.000128 |
| checking permissions | 0.000005 |
| checking permissions | 0.000003 |
| checking permissions | 0.000003 |
| checking permissions | 0.000003 |
| checking permissions | 0.000003 |
| checking permissions | 0.000008 |
| Opening tables       | 0.000029 |
| init                 | 0.000063 |
| System lock          | 0.000010 |
| optimizing           | 0.000019 |
| statistics           | 0.000120 |
| preparing            | 0.000034 |
| Creating tmp table   | 0.000024 |
| Sorting result       | 0.000007 |
| executing            | 0.000004 |
| Sending data         | 0.506220 |
| Creating sort index  | 0.079732 |
| end                  | 0.000013 |
| query end            | 0.000013 |
| removing tmp table   | 0.000040 |
| query end            | 0.000004 |
| closing tables       | 0.000012 |
| freeing items        | 0.000028 |
| cleaning up          | 0.000016 |
+----------------------+----------+


Get this bounty!!!