#StackBounty: #postgresql #ruby-on-rails-4 #heroku Regularly seeing PG::InFailedSqlTransaction error when deploying rails app to heroku

Bounty: 50

Whenever we deploy our Rails/Postgres app and a migration is part of the deploy, we get the following errors:

PG::InFailedSqlTransaction: ERROR: current transaction is aborted,
commands ignored until end of transaction block

PG::FeatureNotSupported: ERROR: cached plan must not change result
type

The offending SQL transaction differs from deploy to deploy.

I’m wondering if there is a way to prevent this from happening when we deploy?
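For context on the second error: the ActiveRecord PostgreSQL adapter caches prepared statements per connection, and a migration that changes a table's columns invalidates those cached plans in any app process that is still connected. A minimal sketch of the mechanism in plain psql (the table is made up for illustration, not taken from the post):

-- A session-level prepared statement breaks once a DDL change alters the row shape.
CREATE TABLE widgets (id serial PRIMARY KEY, name text);
PREPARE all_widgets AS SELECT * FROM widgets;
EXECUTE all_widgets;                                     -- works

ALTER TABLE widgets ADD COLUMN created_at timestamptz;   -- the "migration"
EXECUTE all_widgets;                                     -- ERROR: cached plan must not change result type

Commonly discussed mitigations are restarting the app processes once the migration has finished, or disabling prepared statements in database.yml; whether either fits this particular deploy pipeline is a judgment call.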


Get this bounty!!!

#StackBounty: #postgresql #full-text-search #jsonb #postgresql-9.5 #postgresql-9.6 How to implement full text search on complex nested …

Bounty: 50

I have pretty complex JSONB stored in one jsonb column.

DB table looks like:

 CREATE TABLE sites (
   id text NOT NULL,
   doc jsonb,
   PRIMARY KEY (id)
 )

The data we store in the doc column is complex nested JSON:

   {
      "_id": "123",
      "type": "Site",
      "identification": "Custom ID",
      "title": "SITE 1",
      "address": "UK, London, Mr Tom's street, 2",
      "buildings": [
          {
               "uuid": "12312",
               "identification": "Custom ID",
               "name": "BUILDING 1",
               "deposits": [
                   {
                      "uuid": "12312",
                      "identification": "Custom ID",             
                      "audits": [
                          {
                             "uuid": "12312",         
                              "sample_id": "SAMPLE ID"                
                          }
                       ]
                   }
               ]
          } 
       ]
    }

So the structure of my JSONB looks like:

SITE 
  -> ARRAY OF BUILDINGS
     -> ARRAY OF DEPOSITS
       -> ARRAY OF AUDITS

We need to implement full-text search over certain values in each type of entry:

SITE (identification, title, address)
BUILDING (identification, name)
DEPOSIT (identification)
AUDIT (sample_id)

The SQL query should run a full-text search against these field values only.

I guess I need to use GIN indexes and something like tsvector, but I do not have enough PostgreSQL background.

So, my question is: is it possible to index and then query such nested JSONB structures?
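One way this can work (a sketch only, not from the original post) is an expression GIN index over a tsvector built from exactly those fields. The function below assumes the field names from the sample document above; it must be declared IMMUTABLE to be usable in an index, which is safe here because its result depends only on the input document:

CREATE OR REPLACE FUNCTION site_search_vector(doc jsonb) RETURNS tsvector AS $$
  SELECT to_tsvector('simple',
           concat_ws(' ',
             doc->>'identification', doc->>'title', doc->>'address',
             string_agg(concat_ws(' ',
               b->>'identification', b->>'name',
               d->>'identification',
               a->>'sample_id'), ' ')))
  FROM jsonb_array_elements(coalesce(doc->'buildings', '[]'::jsonb)) AS b
  LEFT JOIN LATERAL jsonb_array_elements(coalesce(b->'deposits', '[]'::jsonb)) AS d ON true
  LEFT JOIN LATERAL jsonb_array_elements(coalesce(d->'audits', '[]'::jsonb)) AS a ON true
$$ LANGUAGE sql IMMUTABLE;

-- Index the expression, then search through it:
CREATE INDEX sites_search_idx ON sites USING gin (site_search_vector(doc));

SELECT id FROM sites
WHERE site_search_vector(doc) @@ plainto_tsquery('simple', 'BUILDING 1');

The index is only used when the query repeats the same expression, so wrapping the search in a view or helper keeps the two in sync.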


Get this bounty!!!

#StackBounty: #sql #ruby-on-rails #postgresql Rails/Postgres query all records created after 5 PM in taking into account daylight savin…

Bounty: 50

I am looking to query for all records in a table with a created_at time after 5 PM Eastern, taking daylight saving time into account.

My DB is Postgres and all timestamps are stored in UTC, as usual for a Rails app.

So, say I have four records

ID, created_at, time in EST (-5 offset from UTC),

1, "2017-01-01 22:46:21.829333", 5:46 PM
2, "2017-01-01 21:23:27.259393", 4:23 PM

-------- DST SWITCH -----

ID, created_at, time in EDT (-4 offset from UTC),

3, "2017-03-20 21:52:46.135713", 5:52 PM
4, "2017-06-21 20:08:53.034377", 4:08 PM

My query should return records 1 and 3, but should ignore 2 and 4.

I have tried

SELECT "id",
       "created_at"
FROM "orders"
WHERE (created_at::TIME WITHOUT TIME ZONE AT TIME ZONE 'utc' AT TIME ZONE 'US/Eastern' > ("created_at"::DATE + TIME '21:00')::TIME AT TIME ZONE 'utc' AT TIME ZONE 'US/Eastern')
ORDER BY "orders"."created_at" ASC

But this query returns records 1, 2, and 3, when it should not return 2.
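A sketch of the usual fix (hedged, not verified against the real table): interpret the stored value as UTC, convert it to Eastern wall-clock time, and only then take the time-of-day part, so the DST switch is handled by the time zone database:

SELECT id, created_at
FROM orders
WHERE ((created_at AT TIME ZONE 'UTC') AT TIME ZONE 'US/Eastern')::time >= TIME '17:00'
ORDER BY created_at ASC;

For the sample data above this should return records 1 and 3 only.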


Get this bounty!!!

#StackBounty: #postgresql #foreign-key #pgpool Drop foreign key locks referenced table

Bounty: 50

Recently I wanted to drop a FK on table X referencing the userid of the “user” table. The user table is of course very frequently read from, but I still assumed it would be safe to drop the FK without database downtime. My reasoning was that the user table should not have to be locked while removing the FK from table X.

When executing

ALTER TABLE x drop constraint fk12345;

though, the query took a very long time, the database load increased significantly, and the query had to be aborted.

So my question is: Does removing a FK lock the referenced table? If not, what else might be the explanation for the long duration?

Extra info: The query was run against two PostgreSQL 9.3 instances running behind pgpool.
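The question itself is left open here, but next time the statement hangs, a look at pg_locks shows which sessions hold or wait for locks on the two tables. A minimal sketch (table names taken from the question; run it from another session while the ALTER TABLE is waiting):

SELECT l.pid, l.relation::regclass AS relation, l.mode, l.granted, a.query
FROM pg_locks l
JOIN pg_stat_activity a ON a.pid = l.pid
WHERE l.relation IN ('x'::regclass, '"user"'::regclass)
ORDER BY l.granted, l.pid;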


Get this bounty!!!

#StackBounty: #postgresql #geometry #postgresql-9.6 Query using subset of path in Postgres

Bounty: 50

Given this table:

 id |            points (path)                 |
----+------------------------------------------+
  1 | ((1,2),(3,4),(5,6),(7,8))                |

Is it possible to achieve the following using a single geometric operator and a path argument (a sequential subset of the containing path), like ((3,4),(5,6))?

select * from things where points @> '(3,4)' and points @> '(5,6)';


Get this bounty!!!

#StackBounty: #postgresql #lua #openresty Connecting to postgresql from lapis

Bounty: 50

I decided to play with lapis – https://github.com/leafo/lapis – but the application crashes when I try to query the database (PostgreSQL), with the following output:

2017/07/01 16:04:26 [error] 31284#0: *8 lua entry thread aborted: runtime error: attempt to yield across C-call boundary
stack traceback:
coroutine 0:
[C]: in function 'require'
/usr/local/share/lua/5.1/lapis/init.lua:15: in function 'serve'
content_by_lua(nginx.conf.compiled:22):2: in function , client: 127.0.0.1, server: , request: "GET / HTTP/1.1", host: "localhost:8080"

The code that causes the error:

local db = require("lapis.db")
local res = db.query("SELECT * FROM users");

config.lua:

config({ "development", "production" }, {
    postgres = {
        host = "0.0.0.0",
        port = "5432",
        user = "wars_base",
        password = "12345",
        database = "wars_base"
    }
})

The database is running, the table is created, and the table contains one record.

What could be the problem?


Get this bounty!!!

#StackBounty: #python #postgresql #dataset Best way to access and close a postgres database using python dataset

Bounty: 300

import dataset    
from sqlalchemy.pool import NullPool

db = dataset.connect(path_database, engine_kwargs={'poolclass': NullPool})

table_f1 = db['name_table']
# Do operations on table_f1

db.commit()
db.executable.close()

I use this code to access a postgres database and sometimes write to it. Finally, I close it. Is the above code the best way to access and close it? Alternatively, is the code below better?

import dataset    
from sqlalchemy.pool import NullPool

with dataset.connect(path_database, engine_kwargs={'poolclass': NullPool}) as db:
    table_f1 = db['name_table']
    # Do operations on table_f1

    db.commit()

In particular, I want to make 100% sure that there is no connection to the postgres database once this piece of code is done. Which is the better way to achieve it: option 1 or option 2?
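Whichever variant you choose, you can double-check from the server side that the script really left no session behind. A quick sketch (the database name is a placeholder; run it from a different database, e.g. psql connected to the postgres db, so your own session does not show up):

SELECT pid, state, application_name, query
FROM pg_stat_activity
WHERE datname = 'your_database';

If this returns no rows after the script exits, the pool was fully disposed.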


Get this bounty!!!

#StackBounty: #ruby-on-rails #postgresql #activerecord Commit a nested transaction

Bounty: 50

Let’s say I have a method that provides access to an API client in the scope of a user, and the API client will automatically update the user’s OAuth tokens when they expire.

class User < ActiveRecord::Base

  def api
    ApiClient.new access_token: oauth_access_token,
                  refresh_token: oauth_refresh_token,
                  on_oauth_refresh: -> (tokens) {
                    # This proc will be called by the API client when an 
                    # OAuth refresh occurs
                    update_attributes({
                      oauth_access_token: tokens[:access_token],
                      oauth_refresh_token: tokens[:refresh_token]
                     })
                   }
  end

end

If I consume this API within a Rails transaction, and a refresh occurs, and then an error occurs, I can’t persist the new OAuth tokens (because the proc above also runs as part of the transaction):

u = User.first

User.transaction { 
  local_info = Info.create!

  # My tokens are expired so the client automatically
  # refreshes them and calls the proc that updates them locally.
  external_info = u.api.get_external_info(local_info.id)

  # Now when I try to locally save the info returned by the API an exception
  # occurs (for example due to validation). This rolls back the entire 
  # transaction (including the update of the user's new tokens.)
  local_info.info = external_info 
  local_info.save!
}

I’m simplifying the example, but basically the consumption of the API and the persistence of the data returned by the API need to happen within a transaction. How can I ensure the update to the user’s tokens gets committed even if the parent transaction fails?


Get this bounty!!!

#StackBounty: #postgresql #vacuum #postgresql-8.4 delete hung soon after vacuum verbose analyse

Bounty: 50

I have a fairly large Java webapp system that connects to a PostgreSQL 8.4 DB on RHEL.
This app only does inserts and reads.

This has been running without issue for 2.5 years and for the last year or so I implemented a data deletion script which deletes up to 5000 parent records and the corresponding child data in 3 tables.

This deletion script is called from a cron job every 5 minutes. It has been working flawlessly for over a year. This script always takes about 1-3 seconds to complete.

Also called from cron is a vacuum verbose analyse script, which runs just once a day (a few minutes before a deletion). This too has been working flawlessly for over a year. It takes about 15 minutes to complete.

Last weekend (Saturday), the vacuum kicked off at 14:03, and the deletion kicked off at 14:07 and did not complete. The next deletion kicked off at 14:12 and encountered a deadlock. The webapp hung at 14:14. At 14:16 everything resumed as normal and has been running fine since.

Now

To add more confusion, this server has an almost identical setup (a standby server); however, the vacuum cron is due to run at 02:03 in the morning. When the vacuum kicked off on Sunday at 02:03, the same situation as above was encountered, and at 02:14 the Java app hung, resuming at 02:15.

More

To further confuse me:
every time I go to the site I take a reading of df – this is always around 40%, but after this happened, df now reports 5% less.

Any ideas? Please let me know if I have left out any relevant information.

edit

This is the Postgresql-Sat.log

WARNING:  skipping "pg_authid" --- only superuser can vacuum it
WARNING:  skipping "pg_database" --- only superuser can vacuum it
WARNING:  skipping "pg_tablespace" --- only superuser can vacuum it
WARNING:  skipping "pg_pltemplate" --- only superuser can vacuum it
WARNING:  skipping "pg_shdepend" --- only superuser can vacuum it
WARNING:  skipping "pg_shdescription" --- only superuser can vacuum it
WARNING:  skipping "pg_auth_members" --- only superuser can vacuum it
ERROR:  deadlock detected
DETAIL:  Process 12729 waits for ShareLock on transaction 91311423; blocked by process 12800.
    Process 12800 waits for ShareLock on transaction 91311422; blocked by process 12729.
    Process 12729: delete from child1 where id in
    (
    select id from parent where date_collected < now() - interval '13 months' order by id limit 5000
    );
    Process 12800: delete from child1 where id in
    (
    select id from parent where date_collected < now() - interval '13 months' order by id limit 5000
    );
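The deadlock detail shows two runs of the same deletion competing for the same child rows. One hedged option (a sketch, not part of the original setup) is to wrap the delete in a session advisory lock, so that a second cron invocation skips its run instead of colliding with one still in progress:

-- 42 is an arbitrary application-chosen lock key; pg_try_advisory_lock returns
-- immediately with false if another session already holds it.
SELECT pg_try_advisory_lock(42) AS got_lock;

-- Only if got_lock is true:
DELETE FROM child1 WHERE id IN (
    SELECT id FROM parent
    WHERE date_collected < now() - interval '13 months'
    ORDER BY id
    LIMIT 5000
);

SELECT pg_advisory_unlock(42);

Advisory locks are available in 8.4, so this needs no upgrade; whether the interaction with the daily vacuum also needs attention is a separate question.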


Get this bounty!!!

#StackBounty: #database #cloud-service #synchronization #postgresql Sync local postgres db to a cloud postgres db, as a Windows Service

Bounty: 100

I’m looking for software that will sync a local postgres db to a db in the cloud, like what Tableau can do with this service.

This software has to be installed as a Windows service. It can be installed on any Windows machine.

The data will be pulled from the local db and used to refresh a postgres db hosted in the cloud.

The direction is always local to remote, and the refresh period is within a frame of a few minutes.

The service and the cloud host can be from the same provider. Actually, we are looking for a solution that will be part of the same ecosystem.

Thanks


Get this bounty!!!