#StackBounty: #ubuntu #postgresql #zabbix #zabbix-agent username/password in zabbix agent config file for Postgres monitoring?

Bounty: 100

Reading the guide on configuring Agent 2 for Postgres monitoring with Zabbix, I see that it mentions: "Set the user name and password in host macros ({$PG.USER} and {$PG.PASSWORD}) if you want to override parameters from the Zabbix agent configuration file."

I’m curious how to go about setting the username and password in the zabbix_agent2.conf file, so that I don’t have to store them as macros on the Zabbix server.

With Ubuntu at least, the package doesn’t create a /var/lib/zabbix directory to put a .pgpass file in, so I went ahead and created it, but the agent doesn’t appear to be picking up those login credentials.

I’d like to get stats on replication, which requires superuser privileges, so I want to keep the credentials on the Postgres server itself.
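For reference, the .pgpass layout I used follows the standard libpq conventions (the user name and password below are placeholders). As I understand it, libpq ignores the file unless it is private to its owner, and whether agent 2’s Go-based Postgres plugin reads this file at all is something I have not been able to confirm:

# /var/lib/zabbix/.pgpass  (home directory of the zabbix service user)
# standard libpq format, one connection per line:
# hostname:port:database:username:password
localhost:5432:*:zbx_monitor:s3cr3t

# libpq only honours the file when it is private to its owner:
#   chown zabbix:zabbix /var/lib/zabbix/.pgpass
#   chmod 0600 /var/lib/zabbix/.pgpass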


Get this bounty!!!

#StackBounty: #python #django #postgresql #iis #hosting How to Host Python, Django, PostgreSQL Application on IIS 10

Bounty: 50

I am trying to host my Django application on a Windows Server 2016 IIS server. I used Python, Django, pipenv as the virtual environment, and PostgreSQL as the database.
How do I host it using a domain name?

I have tried almost everything available on the internet, but so far I have not been successful. Maybe I just haven’t found the right tutorial for hosting a Django application on IIS.

Please help me host the Django application on IIS 10.

I will be really grateful for the help.

Thanks in advance.

Error occurred while reading WSGI handler:

Traceback (most recent call last):
  File "c:\users\sachin kumar\.virtualenvs\login-zru7l_54\lib\site-packages\wfastcgi.py", line 791, in main
    env, handler = read_wsgi_handler(response.physical_path)
  File "c:\users\sachin kumar\.virtualenvs\login-zru7l_54\lib\site-packages\wfastcgi.py", line 633, in read_wsgi_handler
    handler = get_wsgi_handler(os.getenv("WSGI_HANDLER"))
  File "c:\users\sachin kumar\.virtualenvs\login-zru7l_54\lib\site-packages\wfastcgi.py", line 586, in get_wsgi_handler
    raise Exception('WSGI_HANDLER env var must be set')
Exception: WSGI_HANDLER env var must be set

StdOut:

StdErr:
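For what it’s worth, the traceback comes from wfastcgi failing to find a WSGI_HANDLER setting; with wfastcgi this is normally supplied through the site’s web.config, whose appSettings end up as environment variables for the Python process. Below is only a hedged sketch; the project name and path are placeholders, not taken from this setup:

<!-- web.config in the site root (values below are hypothetical) -->
<configuration>
  <appSettings>
    <!-- these appSettings become environment variables for the FastCGI Python process -->
    <add key="WSGI_HANDLER" value="django.core.wsgi.get_wsgi_application()" />
    <add key="DJANGO_SETTINGS_MODULE" value="myproject.settings" />
    <add key="PYTHONPATH" value="C:\inetpub\wwwroot\myproject" />
  </appSettings>
</configuration>

This covers only the application settings; the FastCGI handler mapping itself still has to be configured in IIS.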


Get this bounty!!!

#StackBounty: #postgresql #query-performance #postgresql-performance #encryption #cpu Postgres SQL CPU and RAM optimisation to improve …

Bounty: 50

I am currently working on an application that requires column-based decryption of a few thousand rows on a regular basis. The rows are decrypted using pgp_sym_decrypt, with several columns decrypted per select.

For a few thousand records the queries are unfortunately quite slow, and I found that the CPU and RAM are barely used: top shows 6.6% CPU usage and 14 GB of RAM free out of 16 GB. A "standard query" takes around 30 seconds to complete, while acceptable performance would be around 5 seconds.

I tried changing a few parameters in postgresql.conf, but I didn’t get any performance improvement. The Postgres version is 10.6.

Is there a way to make Postgres use more of the hardware so that the decryption runs faster?
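For illustration, the queries look roughly like the sketch below (table and column names are placeholders). Each returned row pays the cost of one pgp_sym_decrypt call per encrypted column, which is CPU work done row by row and, unless the plan is parallelised, within a single backend process:

-- hypothetical names; several pgp_sym_decrypt calls per returned row
SELECT id,
       pgp_sym_decrypt(first_name_enc, 'my-secret-key') AS first_name,
       pgp_sym_decrypt(last_name_enc,  'my-secret-key') AS last_name,
       pgp_sym_decrypt(notes_enc,      'my-secret-key') AS notes
FROM   customer_records
WHERE  created_at >= now() - interval '1 day';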


Get this bounty!!!

#StackBounty: #postgresql #database-design #foreign-key #relational-theory #referential-integrity How to store a (record which holds a)…

Bounty: 100

TL;DR: If the database schema should hold all the business logic, how is it possible to specify that an attribute’s type is a reference to a specific attribute, rather than to a specific record (as is the case with a foreign key)?

As an example, let’s suppose I have a table "Discounts" with a column "share" which holds the percentage to be applied to the value of the column "cost", "price", or "shipping" of the table "Items".

"Discounts" also holds a foreign key column "item_id" referencing "Items".

I need to add another column "base" to the table "Discounts" in which to store a reference to one of the columns of the table "Items", so that the percentage is calculated on the value of that column.
For example, given these values:

Discounts
share    base                 item_id
-------------------------------------
50       (item's cost)        3
25       (item's price)       1
100      (item's shipping)    2


Items
id    cost    price    shipping
-------------------------------
1     10      40       20
2     55      60       30
3     50      85       10

I want to be able to calculate:

  • 50% of 50 (cost of item 3)
  • 25% of 40 (price of item 1)
  • 100% of 30 (shipping of item 2)

The column "base" should contain neither the number (e.g. 3) nor the name (e.g. "price") of the referenced column, because the name or the order of each table could change.
In particular a database doesn’t have any knowledge about the columns (attributes) order or the rows (records/tuplets) order, infact the RDB theory asserts that «the tuples of a relation have no specific order and that the tuples, in turn, impose no order on the attributes.»

If instead we rely on the column names, we have to enforce that each entry holds a valid attribute name, and whenever an attribute name changes we must also change the stored records, the constraints, and the app’s validations. If the name is referenced in multiple relations, maintaining database integrity becomes very complex.

The problem here is that we would not be writing the reference to the attribute into the database schema (as we do when we add a foreign key), but into the data themselves, and this seems like very bad practice, since it threatens referential integrity.

If there is no DB-agnostic way to do this, then assume the database is PostgreSQL (v12+).
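For concreteness, the name-based workaround described above (the one I’d like to avoid) would look roughly like this hypothetical sketch, where "base" stores the column name as data and a CHECK constraint plus a CASE expression stand in for real referential integrity:

CREATE TABLE items (
    id       integer PRIMARY KEY,
    cost     numeric NOT NULL,
    price    numeric NOT NULL,
    shipping numeric NOT NULL
);

CREATE TABLE discounts (
    share   numeric NOT NULL,
    base    text    NOT NULL CHECK (base IN ('cost', 'price', 'shipping')),
    item_id integer NOT NULL REFERENCES items (id)
);

-- 50% of the cost of item 3, 25% of the price of item 1, 100% of the shipping of item 2
SELECT d.item_id,
       d.share / 100.0 * CASE d.base
                             WHEN 'cost'     THEN i.cost
                             WHEN 'price'    THEN i.price
                             WHEN 'shipping' THEN i.shipping
                         END AS discounted_value
FROM   discounts d
JOIN   items     i ON i.id = d.item_id;

As noted above, renaming a column in "Items" would then require updating both the data stored in "base" and the CHECK constraint, which is exactly the fragility I’m trying to avoid.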


Get this bounty!!!

#StackBounty: #postgresql #aws #amazon-rds-aurora Migrate cluster Aurora RDS postgres from 9.X to 10 without downtime

Bounty: 50

OK, this question is about finding a strategy or tool that helps me migrate an Aurora RDS Postgres cluster from 9.6 to 10 without downtime.

What I have found so far:

  • RDS allows in-place major version upgrades (Aurora clusters don’t)
  • DMS can only do a full load from a Postgres 9.6 source, without ongoing sync :/ (ongoing sync is supported only from Postgres 10)

Kinesis and DMS depend on a logical replication slot, which Aurora clusters only expose starting with Postgres 10 🙁

Right now I have two strategies in mind:

  • Migrate the databases one by one with a very small downtime. I would need to do a backup/restore and redeploy the application, because there will be two Route 53 references during the migration.

  • Build a strategy/application combining DMS + triggers + AWS Lambdas:
    DMS for the full load, triggers to register updates, and a Lambda to read the updates and migrate the data, keeping the servers in sync until the endpoint behind Route 53 is switched over (a lot of responsibility for a single DBA).

This is what I have so far; if you know of a tool or a more interesting strategy, please send it my way 😀

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_UpgradeDBInstance.Upgrading.html


Get this bounty!!!

#StackBounty: #open-source #database #web #postgresql Frontend for CRUD operations Postgres

Bounty: 50

I need a frontend for CRUD operations on a Postgres database. It has to be understandable for people without a database background as well.

It would also be nice if it were open source and free (and maybe web-based). I’ve tried LibreOffice Base with forms, but I’m struggling with the following problem:

I have a "main"-table ‘Person’, a table ‘Hobby’ and a table ‘Friends’, all holding an ID (as PK) and several attributes.

To insert data into ‘Person’, I found the forms very easy to use.
But if I want to add Hobbies and Friends to Persons, the problems begin.

Since I found that in Postgres there is no way to store something like an array of foreign keys (see https://stackoverflow.com/questions/34156695/postgres-creating-a-table-with-an-array-of-foreign-keys), I modelled it in the database schema as, e.g., a ‘Person_has_Hobby’ relation that holds foreign keys to persons and hobbies.
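The junction relation itself is straightforward; a simplified sketch (actual table and column names differ):

CREATE TABLE person_has_hobby (
    person_id integer NOT NULL REFERENCES person (id),
    hobby_id  integer NOT NULL REFERENCES hobby (id),
    PRIMARY KEY (person_id, hobby_id)
);

-- assigning hobby 2 to person 1
INSERT INTO person_has_hobby (person_id, hobby_id) VALUES (1, 2);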

So manually inserting or editing is not the problem.

How can I get this kind of mapping in an easily usable way (WYSIWYG and maybe web-based; the programming language is secondary for the moment)?


Get this bounty!!!

#StackBounty: #postgresql #performance #join #database-design #database-performance What is the best way to join the same table twice i…

Bounty: 50

With a second join on the same table, performance degrades by nearly half (the execution time almost doubles):

SELECT * FROM party_party_relationship AS ppr 
    LEFT JOIN party_role AS r1 ON r1.party_role_uid = ppr.party_role_uid
    LEFT JOIN party_role AS r2 ON r2.party_role_uid = ppr.party_role_uid_related

Performance with the first join only:

"Hash Left Join  (cost=288.18..547.72 rows=10972 width=144) (actual time=5.281..17.781 rows=11192 loops=1)"
"  Hash Cond: (ppr.party_role_uid = r1.party_role_uid)"
"  ->  Seq Scan on party_party_relationship ppr  (cost=0.00..230.72 rows=10972 width=98) (actual time=0.020..2.438 rows=11192 loops=1)"
"  ->  Hash  (cost=181.97..181.97 rows=8497 width=46) (actual time=5.186..5.187 rows=9946 loops=1)"
"        Buckets: 16384  Batches: 1  Memory Usage: 823kB"
"        ->  Seq Scan on party_role r1  (cost=0.00..181.97 rows=8497 width=46) (actual time=0.010..2.073 rows=9946 loops=1)"
"Planning Time: 0.472 ms"
"Execution Time: 18.765 ms"

With the second join on the same table, the execution time almost doubles:

"Hash Left Join  (cost=576.37..864.71 rows=10972 width=190) (actual time=9.871..31.986 rows=11192 loops=1)"
"  Hash Cond: (ppr.party_role_uid_related = r2.party_role_uid)"
"  ->  Hash Left Join  (cost=288.18..547.72 rows=10972 width=144) (actual time=5.163..18.437 rows=11192 loops=1)"
"        Hash Cond: (ppr.party_role_uid = r1.party_role_uid)"
"        ->  Seq Scan on party_party_relationship ppr  (cost=0.00..230.72 rows=10972 width=98) (actual time=0.015..2.735 rows=11192 loops=1)"
"        ->  Hash  (cost=181.97..181.97 rows=8497 width=46) (actual time=5.091..5.092 rows=9946 loops=1)"
"              Buckets: 16384  Batches: 1  Memory Usage: 823kB"
"              ->  Seq Scan on party_role r1  (cost=0.00..181.97 rows=8497 width=46) (actual time=0.008..2.030 rows=9946 loops=1)"
"  ->  Hash  (cost=181.97..181.97 rows=8497 width=46) (actual time=4.644..4.644 rows=9946 loops=1)"
"        Buckets: 16384  Batches: 1  Memory Usage: 823kB"
"        ->  Seq Scan on party_role r2  (cost=0.00..181.97 rows=8497 width=46) (actual time=0.014..1.810 rows=9946 loops=1)"
"Planning Time: 0.925 ms"
"Execution Time: 32.920 ms"

The above query is just a part of the whole query; the full query is:

SELECT * FROM party_party_relationship AS ppr 
    INNER JOIN party_role AS r1 ON r1.party_role_uid = ppr.party_role_uid
        INNER JOIN party AS p1 ON p1.party_uid = r1.party_uid
                LEFT JOIN party_name AS n1 ON n1.party_uid = p1.party_uid AND n1.end_date IS NULL
                LEFT JOIN business_number AS b1 ON b1.party_uid = p1.party_uid AND b1.business_number_cd = p1.business_number_cd AND b1.end_date IS NULL

    INNER JOIN party_role AS r2 ON r2.party_role_uid = ppr.party_role_uid_related
        INNER JOIN party AS p2 ON p2.party_uid = r2.party_uid
                LEFT JOIN party_name AS n2 ON n2.party_uid = p2.party_uid AND n2.end_date IS NULL
                LEFT JOIN business_number AS b2 ON b2.party_uid = p2.party_uid AND b2.business_number_cd = p2.business_number_cd AND b2.end_date IS NULL
                
                WHERE ppr.case_uid = 9

Execution Plan

"Nested Loop Left Join  (cost=1113.46..3576.37 rows=915 width=772) (actual time=19.687..76.911 rows=919 loops=1)"
"  ->  Nested Loop Left Join  (cost=1113.31..3270.33 rows=915 width=694) (actual time=19.616..56.253 rows=919 loops=1)"
"        Join Filter: (n1.end_date IS NULL)"
"        ->  Hash Left Join  (cost=1113.03..2415.51 rows=915 width=547) (actual time=19.588..51.236 rows=915 loops=1)"
"              Hash Cond: (r1.party_uid = p2.party_uid)"
"              ->  Hash Left Join  (cost=856.60..2156.68 rows=915 width=481) (actual time=15.192..45.391 rows=915 loops=1)"
"                    Hash Cond: (ppr.party_role_uid_related = r2.party_role_uid)"
"                    ->  Nested Loop Left Join  (cost=568.42..1866.09 rows=915 width=435) (actual time=9.743..38.415 rows=915 loops=1)"
"                          ->  Nested Loop Left Join  (cost=568.27..1560.05 rows=915 width=357) (actual time=9.665..17.956 rows=915 loops=1)"
"                                ->  Hash Left Join  (cost=567.99..705.23 rows=915 width=210) (actual time=9.639..12.460 rows=915 loops=1)"
"                                      Hash Cond: (r1.party_uid = p1.party_uid)"
"                                      ->  Hash Left Join  (cost=311.56..446.40 rows=915 width=144) (actual time=5.314..7.056 rows=915 loops=1)"
"                                            Hash Cond: (ppr.party_role_uid = r1.party_role_uid)"
"                                            ->  Bitmap Heap Scan on party_party_relationship ppr  (cost=23.38..155.81 rows=915 width=98) (actual time=0.111..0.536 rows=915 loops=1)"
"                                                  Recheck Cond: (insolvency_case_uid = 9)"
"                                                  Heap Blocks: exact=18"
"                                                  ->  Bitmap Index Scan on ixfk_party_party_relationship_insolvency_case  (cost=0.00..23.15 rows=915 width=0) (actual time=0.097..0.097 rows=926 loops=1)"
"                                                        Index Cond: (insolvency_case_uid = 9)"
"                                            ->  Hash  (cost=181.97..181.97 rows=8497 width=46) (actual time=5.149..5.149 rows=9960 loops=1)"
"                                                  Buckets: 16384  Batches: 1  Memory Usage: 824kB"
"                                                  ->  Seq Scan on party_role r1  (cost=0.00..181.97 rows=8497 width=46) (actual time=0.009..1.979 rows=9960 loops=1)"
"                                      ->  Hash  (cost=161.19..161.19 rows=7619 width=66) (actual time=4.290..4.290 rows=7449 loops=1)"
"                                            Buckets: 8192  Batches: 1  Memory Usage: 701kB"
"                                            ->  Seq Scan on party p1  (cost=0.00..161.19 rows=7619 width=66) (actual time=0.013..1.680 rows=7449 loops=1)"
"                                ->  Index Scan using ixfk_party_name_party on party_name n1  (cost=0.28..0.92 rows=1 width=147) (actual time=0.004..0.005 rows=1 loops=915)"
"                                      Index Cond: (party_uid = p1.party_uid)"
"                                      Filter: (end_date IS NULL)"
"                                      Rows Removed by Filter: 0"
"                          ->  Index Scan using ex_business_number_end_date on business_number b1  (cost=0.15..0.32 rows=1 width=78) (actual time=0.020..0.021 rows=1 loops=915)"
"                                Index Cond: ((party_uid = p1.party_uid) AND (business_number_cd = p1.business_number_cd))"
"                    ->  Hash  (cost=181.97..181.97 rows=8497 width=46) (actual time=5.293..5.293 rows=9960 loops=1)"
"                          Buckets: 16384  Batches: 1  Memory Usage: 824kB"
"                          ->  Seq Scan on party_role r2  (cost=0.00..181.97 rows=8497 width=46) (actual time=0.010..1.799 rows=9960 loops=1)"
"              ->  Hash  (cost=161.19..161.19 rows=7619 width=66) (actual time=4.313..4.314 rows=7449 loops=1)"
"                    Buckets: 8192  Batches: 1  Memory Usage: 701kB"
"                    ->  Seq Scan on party p2  (cost=0.00..161.19 rows=7619 width=66) (actual time=0.011..1.587 rows=7449 loops=1)"
"        ->  Index Scan using ixfk_party_name_party on party_name n2  (cost=0.28..0.92 rows=1 width=147) (actual time=0.003..0.003 rows=1 loops=915)"
"              Index Cond: (party_uid = p2.party_uid)"
"  ->  Index Scan using ex_business_number_end_date on business_number b2  (cost=0.15..0.32 rows=1 width=78) (actual time=0.020..0.020 rows=1 loops=919)"
"        Index Cond: ((party_uid = p2.party_uid) AND (business_number_cd = p2.business_number_cd))"
"Planning Time: 4.499 ms"
"Execution Time: 77.433 ms"

Plan as a graph: (image showing part of the execution plan omitted)

Is there any better way to do it? The table is expected to grow very fast.


Get this bounty!!!

#StackBounty: #postgresql Cannot use "ON CONFLICT" with postgres updatable view and partial index

Bounty: 50

I have an updatable view pointing to an underlying table that has a partial index. It looks something like this

CREATE TABLE if not exists foo (
    a INT,
    b INT,
    x INT,
    y INT,
    z BOOLEAN,
    CONSTRAINT x_or_y CHECK (
      (z and x is not null and y is null)
      or 
      (not z and x is null and y is not null)
    )
);
CREATE UNIQUE INDEX ux ON foo (x, a) WHERE z=TRUE;
CREATE UNIQUE INDEX uy ON foo (y, a) WHERE z=FALSE;
CREATE OR REPLACE VIEW  foo_view AS 
    SELECT * FROM foo;

That is, for each row, y must be null if z is true; x must be null if z is false; and, only one of x and y may be not null. (x, a) and (y, a) must be unique. (Sorry if this is overly complicated. I’m translating from my real table that has a lot of other cruft.)

My problem comes when I want to update with ON CONFLICT. I believe I ought to be able to do this.

INSERT INTO foo_view(x, y, a, b, z)
  VALUES
  (5, null, 1, 2, true),
  (null, 5, 1, 2, false);

select * from foo_view;

INSERT INTO foo_view(x, y, a, b, z)
  VALUES
  (5, null, 1, 2, true)
ON CONFLICT (x, a) where z=true
  DO UPDATE
  set b = EXCLUDED.b;

But, I get the exception:

ERROR:  there is no unique or exclusion constraint matching the ON CONFLICT specification

I can insert into foo instead of foo_view with the same ON CONFLICT without error.
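For reference, this is the equivalent statement against the base table, which runs without error:

INSERT INTO foo (x, y, a, b, z)
  VALUES
  (5, null, 1, 2, true)
ON CONFLICT (x, a) where z=true
  DO UPDATE
  set b = EXCLUDED.b;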

Here is a fiddle: https://www.db-fiddle.com/f/cX2HXg91Q7yKoPeMBYzVLg/0


Get this bounty!!!

#StackBounty: #postgresql #google-cloud-sql Postgresql sudden storage increase at regular interval?

Bounty: 50

I am running PostgreSQL 11 in production and am observing a weird step-function pattern in storage usage, as follows:
(graph: storage usage over a single day, showing step increases)

The above graph is for a single day, and this happens every day at random times. I have checked, and there are no bulk inserts from our side. The steps are also strange because storage comes back down to its normal value after some time.

I also have a read replica, and because of this there is a sudden increase in the replica’s disk usage, which results in high replication lag of around 30 s, which in turn results in "conflict with recovery" problems. Is there a way to deal with this? Is this common behaviour for Postgres? I am running PostgreSQL on Google Cloud SQL and it has been happening for a long time, but I only noticed it today, so I am wondering whether it is normal or whether I need to check something.

One suspicion I have is that it might be because of vacuum, but I am not sure. Please help me with this.
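For reference, here are a couple of diagnostic queries against the standard statistics views (only a sketch) that can show whether autovacuum activity or temporary-file usage coincides with the spikes:

-- most recently autovacuumed tables and their dead-tuple counts
SELECT relname, last_autovacuum, last_autoanalyze, n_dead_tup
FROM   pg_stat_user_tables
ORDER  BY last_autovacuum DESC NULLS LAST
LIMIT  20;

-- cumulative temporary-file usage per database; large sorts or hashes
-- spilling to disk also show up as sudden, temporary storage growth
SELECT datname, temp_files, pg_size_pretty(temp_bytes) AS temp_bytes
FROM   pg_stat_database;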

Edit 1:

Whenever this occurs, there are spikes in read/write IO like the ones below. Red is write and blue is read:

(graph: read/write IO spikes)


Get this bounty!!!