#StackBounty: #postgresql #spring-data-jpa #jsonb how to exclude JSON-fields in a custom jsonb query with CriteriaBuilder?

Bounty: 100

I am trying to construct a query with CriteriaBuilder for this JSON:

{
  "a": "value1",
  "b": "value2",
  "c": "value3",
  "d": "value4"
}

to match this native query:

SELECT field_with_json - 'c' - 'd' FROM table1 WHERE field_with_json ->> 'a' = 'someValue' AND field_with_json ->> 'b' = 'someValue'

I’m struggling to find a way to implement the "- 'c' - 'd'" part, that is, removing certain key/value pairs from the end result, with CriteriaBuilder. Is it possible at all?
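
For reference, here is a sketch of the key removal in plain SQL, using the column and key names from the question: the jsonb - operator drops one top-level key at a time and can be chained, and on newer PostgreSQL versions (10 and later, if I recall correctly) it also accepts a text[] of keys.

-- Chained form, one key per operator:
SELECT field_with_json - 'c' - 'd' AS trimmed
FROM table1
WHERE field_with_json ->> 'a' = 'someValue'
  AND field_with_json ->> 'b' = 'someValue';

-- Array form on newer versions:
SELECT field_with_json - ARRAY['c','d'] AS trimmed
FROM table1
WHERE field_with_json ->> 'a' = 'someValue'
  AND field_with_json ->> 'b' = 'someValue';

Whether CriteriaBuilder can express the - operator directly is the open question here; falling back to a native query is the obvious escape hatch if it cannot.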


Get this bounty!!!

#StackBounty: #ruby-on-rails #postgresql #activerecord #ruby-on-rails-5 Query in ActiveRecord for objects that contain one or more ids …

Bounty: 100

I have a Rails 5.2 project with three models:

class Post < ApplicationRecord
  has_many :post_tags
  has_many :tags, through: :post_tags
end

class PostTag < ApplicationRecord
  belongs_to :post
  belongs_to :tag
end

class Tag < ApplicationRecord
  has_many :post_tags
  has_many :posts, through: :post_tags
end

I have a number of arrays of tag ids, e.g.:

array_1 = [3, 4, 5]
array_2 = [5, 6, 8]
array_3 = [9, 11, 13]

I want a query that will return posts that are tagged with at least one tag with an id from each of the arrays.

For instance, imagine I have a post with the following tag ids:

> post = Post.find(1)
> post.tag_ids
=> [4, 8]

If I ran the query with array_1 and array_2, it would return this post. However, if I ran it with array_1, array_2 and array_3, it would not return this post.

I attempted this with the following query:

Post.joins(:tags).where('tags.id IN (?) AND tags.id IN (?)', array_1, array_2)

But this does not return the post.

What should the query be to return the post?

Any help would be greatly appreciated!
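
(For what it's worth, the AND in that attempt compares a single joined tags.id against both lists, so it can only match when the lists share an id.) Below is a sketch of the intended condition in plain SQL, assuming the conventional Rails table names posts and post_tags; it groups per post and requires at least one match from each list:

SELECT p.id
FROM posts p
JOIN post_tags pt ON pt.post_id = p.id
GROUP BY p.id
HAVING COUNT(*) FILTER (WHERE pt.tag_id IN (3, 4, 5)) > 0   -- at least one tag from array_1
   AND COUNT(*) FILTER (WHERE pt.tag_id IN (5, 6, 8)) > 0;  -- at least one tag from array_2

This is only an illustration of the grouping idea, not a tested Rails answer; it could be fed back into ActiveRecord via find_by_sql or translated into a chained relation.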


Get this bounty!!!

#StackBounty: #postgresql PgSql jsonb_set for all occurrences of multiple keys in json array

Bounty: 50

I have the JSON below:

[
    {
        "rows": [
            {
                "col_class": "col50",
                "col_sec_id": 1626361165906,
                "col_sec_json": null
            },
            {
                "col_class": "col50",
                "col_sec_id": 1626361165907,
                "col_sec_json": {
                    "id": 1626361165907,
                    "data": {
                        "class": "0",
                         "location": "0",
                        "unitForCurrent": ""
                    },
                    "theme": "defaultTheme",
                    "layout": {
                        "fontSize": 14,
                        "fontStyle": "Open Sans",
                        "textColor": "#545454",
                        "isHeadingAlignmentInherited": true
                    },
                    "org_id": 1,
                    "to_date": "2020-12-31",
                    "interval": "Yearly"
                  }
            }
                   
        ],
        "col_sec_id": 1626360791978,
        "row_cols_count": 2
    },
 {
        "rows": [
            {
                "col_class": "col50",
                "col_sec_id": 1626361165906,
                "col_sec_json": null
            },
            {
                "col_class": "col50",
                "col_sec_id": 1626361165907,
                "col_sec_json": {
                    "id": 1626361165907,
                    "data": {
                        "class": "0",
                         "location": "0",
                        "unitForCurrent": ""
                    },
                    "theme": "defaultTheme",
                    "layout": {
                        "fontSize": 14,
                        "fontStyle": "Open Sans",
                        "textColor": "#545454",
                        "isHeadingAlignmentInherited": true
                    },
                    "org_id": 1,
                    "to_date": "2020-12-31",
                    "interval": "Yearly"
                  }
            }
                   
        ],
        "col_sec_id": 1626360791978,
        "row_cols_count": 2
    }

]

I can update a single key across all occurrences like below:

UPDATE YourTable t
SET jdata = 
    (SELECT jsonb_agg(
        jsonb_set(j1.value, '{rows}',
            (
             SELECT jsonb_agg(jsonb_set(j2.value, '{col_sec_json,data,class}', '"new_value"'))
             FROM jsonb_array_elements(j1.value->'rows') j2
            )
        ))
     FROM jsonb_array_elements(t.jdata) j1
     )
;

But I am trying to update multiple keys, such as:

"rows->>cols->>col_sec_json->>data->>class"

"rows->>cols->>col_sec_json->>to_date"

"rows->>cols->>col_sec_json->>layout->>fontSize"

How can I modify the statement above to achieve this?

The output should look like this:

[
    {
        "rows": [
            {
                "col_class": "col50",
                "col_sec_id": 1626361165906,
                "col_sec_json": null
            },
            {
                "col_class": "col50",
                "col_sec_id": 1626361165907,
                "col_sec_json": {
                    "id": 1626361165907,
                    "data": {
                        "class": "new val",
                         "location": "0",
                        "unitForCurrent": ""
                    },
                    "theme": "defaultTheme",
                    "layout": {
                        "fontSize": 19,
                        "fontStyle": "Open Sans",
                        "textColor": "#545454",
                        "isHeadingAlignmentInherited": true
                    },
                    "org_id": 1,
                    "to_date": "2022-12-31",
                    "interval": "Yearly"
                  }
            }
                   
        ],
        "col_sec_id": 1626360791978,
        "row_cols_count": 2
    },
 {
        "rows": [
            {
                "col_class": "col50",
                "col_sec_id": 1626361165906,
                "col_sec_json": null
            },
            {
                "col_class": "col50",
                "col_sec_id": 1626361165907,
                "col_sec_json": {
                    "id": 1626361165907,
                    "data": {
                        "class": "new val",
                         "location": "0",
                        "unitForCurrent": ""
                    },
                    "theme": "defaultTheme",
                    "layout": {
                        "fontSize": 19,
                        "fontStyle": "Open Sans",
                        "textColor": "#545454",
                        "isHeadingAlignmentInherited": true
                    },
                    "org_id": 1,
                    "to_date": "2022-12-31",
                    "interval": "Yearly"
                  }
            }
                   
        ],
        "col_sec_id": 1626360791978,
        "row_cols_count": 2
    }

]
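
One way to get from the statement above to this expected output is to nest the jsonb_set calls, so each column element receives all three changes in a single pass. A sketch built directly on the statement from the question, with the new values hard-coded as placeholders:

UPDATE YourTable t
SET jdata =
    (SELECT jsonb_agg(
        jsonb_set(j1.value, '{rows}',
            (
             -- innermost pass: apply the three changes to each column element
             SELECT jsonb_agg(
                 jsonb_set(
                     jsonb_set(
                         jsonb_set(j2.value, '{col_sec_json,data,class}', '"new val"'),
                         '{col_sec_json,to_date}', '"2022-12-31"'),
                     '{col_sec_json,layout,fontSize}', '19'))
             FROM jsonb_array_elements(j1.value->'rows') j2
            )
        ))
     FROM jsonb_array_elements(t.jdata) j1
     )
;

Elements whose col_sec_json is null pass through the same way they do in the original statement. If the new values come from columns or parameters rather than literals, each quoted literal can be replaced with a to_jsonb(...) expression.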


Get this bounty!!!

#StackBounty: #postgresql #update #deadlock #postgresql-9.5 Possible for two simple updates on the same table to deadlock?

Bounty: 100

Table A is:

id integer
version varchar
data jsonb (large data 1mb)
fkToBid integer (references B.id constraint)

Table B is:

id integer
other... 

Processes are aggressively running the two updates below, in any order and outside any transaction.

The updated records in table A sometimes refer to the same record in table B. Sometimes the same A record is updated by more than one process.

UPDATE A SET version = :version WHERE A.id = :id
and
UPDATE A SET data = :data WHERE A.id = :id

Can these updates deadlock, and if so, why? Is it because the updated records in table A refer to the same row in table B? Could it deadlock for another reason?

Why do I see an AccessShareLock on B's primary key index for these update requests?
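
Not an answer to the "why", but one way to see exactly which locks each statement takes is to query pg_locks while the updates are in flight; a sketch, assuming the tables really are named a and b in lower case:

-- Run from a third session while the two updates are running.
SELECT sa.pid,
       c.relname,        -- table or index the lock is on
       l.locktype,
       l.mode,           -- e.g. RowExclusiveLock, AccessShareLock
       l.granted,
       sa.query
FROM pg_locks l
JOIN pg_stat_activity sa ON sa.pid = l.pid
JOIN pg_class c ON c.oid = l.relation
WHERE c.relname IN ('a', 'b') OR c.relname LIKE '%pkey%'
ORDER BY sa.pid, c.relname;

When a deadlock does occur, the "deadlock detected" entry in the PostgreSQL log lists both processes, the lock modes and the statements involved, which is usually the quickest way to pin down the actual cause.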


Get this bounty!!!

#StackBounty: #postgresql #amazon-rds #postgresql-9.6 #aws #storage Postgres Database flagged as STORAGE FULL whereas there's 45 Gb…

Bounty: 50

I will start this post by stating that I am not very experienced in database administration, so please forgive any unclear explanations below.

We have a PostgreSQL replica hosted on AWS RDS which stopped replicating last week. The instance was flagged as "Storage-full".

However, when looking at the free storage space in CloudWatch, we realized there were still roughly 46 GB available of the 50 GB allocated to the instance. We increased the allocated space to 60 GB and everything went back to normal, but we suspected the issue would come back, and it did.

The main instance from which this one replicates is autovacuumed. It is my understanding that any writes resulting from that on the main are also applied on the replica, so this is probably not a vacuum issue.

Looking at CloudWatch metrics, there is no indication of any problem that might cause this.

It appears logs could be the issue here, but I don't know where to look to investigate this possibility.

I will edit the question with any relevant information you might suggest in the comments.
Thanks for your help.
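
Not a diagnosis, just two things worth checking from inside the database, since the CloudWatch free-storage metric is sampled and can miss short-lived spikes; on RDS the server log files and their sizes can also be listed from the console or with aws rds describe-db-log-files. A sketch for PostgreSQL 9.6 (the xlog-named functions were renamed to wal in later releases):

-- Inactive replication slots retain WAL segments indefinitely.
-- On a standby, use pg_last_xlog_replay_location() instead of pg_current_xlog_location().
SELECT slot_name, active, restart_lsn,
       pg_size_pretty(pg_xlog_location_diff(pg_current_xlog_location(), restart_lsn)) AS retained_wal
FROM pg_replication_slots;

-- Temporary-file usage (large sorts and hashes), cumulative since the last stats reset.
SELECT datname, temp_files, pg_size_pretty(temp_bytes) AS temp_written
FROM pg_stat_database
ORDER BY temp_bytes DESC;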


Get this bounty!!!
