#StackBounty: #postgresql #replication Postgresql 11 logical replication – stuck in `catchup` state

Bounty: 100

I’m running two PostgreSQL 11 servers – a master and a slave (set up with logical replication).

The problem I’m facing is that today, after weeks of uninterrupted operation, the slave got out of sync with this error message:

2019-09-16 07:39:44.332 CEST [30117] ERROR:  could not send data to WAL stream: server closed the connection unexpectedly
                This probably means the server terminated abnormally
                before or while processing the request.
2019-09-16 07:39:44.539 CEST [12932] LOG:  logical replication apply worker for subscription "logical_from_master" has started
2019-09-16 07:39:44.542 CEST [27972] LOG:  background worker "logical replication worker" (PID 30117) exited with exit code 1

I have seen this error message before, and my fix back then was to increase wal_sender_timeout on the master (more details on this here: logical replication in postgresql – "server closed the connection unexpectedly")

I then wanted to restore replication; however, the replication state is stuck on catchup:

master=# select * from pg_stat_replication;
  pid  | usesysid | usename | application_name  |  client_addr  | client_hostname | client_port |         backend_start         | backend_xmin |  state  |   sent_lsn   |  write_lsn   |  flush_lsn   |  replay_lsn  |    write_lag    |    flush_lag    |   replay_lag    | sync_priority | sync_state
-------+----------+---------+-------------------+---------------+-----------------+-------------+-------------------------------+--------------+---------+--------------+--------------+--------------+--------------+-----------------+-----------------+-----------------+---------------+------------
 86864 |    16680 | my_user    | logical_from_master | 10.10.10.10 |                 |       46110 | 2019-09-16 12:45:56.491325+02 |              | catchup | D55/FA04D4B8 | D55/F9E74158 | D55/F9E44CD8 | D55/F9E74030 | 00:00:03.603104 | 00:00:03.603104 | 00:00:03.603104 |             0 | async
(1 row)

I tried restarting the slave a few times, with different combinations of the subscription enabled and disabled – nothing helps; the replication state stays on catchup. I can see the sent_lsn and write_lsn values changing, so something is being sent through…

I’m not sure if this is relevant, but on the slave I ran the following to see the WAL receiver status:

slave=# select * from pg_stat_wal_receiver ;
 pid | status | receive_start_lsn | receive_start_tli | received_lsn | received_tli | last_msg_send_time | last_msg_receipt_time | latest_end_lsn | latest_end_time | slot_name | sender_host | sender_port | conninfo
-----+--------+-------------------+-------------------+--------------+--------------+--------------------+-----------------------+----------------+-----------------+-----------+-------------+-------------+----------
(0 rows)

However, when I do a simple ps -aux | grep postgres:

postgres 26087  0.0  1.1 348788 46160 ?        S    12:45   0:00 /usr/lib/postgresql/11/bin/postgres -D /var/lib/postgresql/11/slave_logical -c config_file=/etc/postgresql/11/slave_logical/postgresql.conf
postgres 26089  0.0  0.2 349108 12080 ?        Ss   12:45   0:00 postgres: 11/slave_logical: checkpointer
postgres 26090  0.0  0.0 348996  3988 ?        Ss   12:45   0:00 postgres: 11/slave_logical: background writer
postgres 26091  0.0  0.1 348788  7204 ?        Ss   12:45   0:00 postgres: 11/slave_logical: walwriter
postgres 26092  0.0  0.1 349740  4396 ?        Ss   12:45   0:00 postgres: 11/slave_logical: autovacuum launcher
postgres 26093  0.0  0.0 170024  3028 ?        Ss   12:45   0:00 postgres: 11/slave_logical: stats collector
postgres 26094  0.0  0.1 349572  4516 ?        Ss   12:45   0:00 postgres: 11/slave_logical: logical replication launcher
postgres 26095  0.0  0.2 350164 10036 ?        Ss   12:45   0:00 postgres: 11/slave_logical: my_user db ::1(56086) idle
postgres 26125  4.5  0.9 359876 36884 ?        Ds   12:45   0:20 postgres: 11/slave_logical: logical replication worker for subscription 37614

So either logical replication does not populate pg_stat_wal_receiver, or for some reason the worker details do not make it into the system view. At the same time, the process seems to be there (or at least that is my guess from the ps -aux output above).

This is my slave configuration:

wal_level=logical
max_replication_slots=2
max_logical_replication_workers=4

wal_receiver_timeout=1200000

And this is my master:

wal_level=logical

max_replication_slots=10
max_wal_senders=10

# maximum wait time in milliseconds that the walsender process on the active master
# waits for a status message from the walreceiver process on the standby master.
wal_sender_timeout=1200000

I have no idea what to do (even worse, at this stage I have no idea what to check next…)

Can you help me understand what I should do to make my slave catch up so it’s back in the streaming state?


Edit (12 hours later)

When I checked in the morning, synchronisation was still in the catchup state:

master=# select * from pg_stat_replication;
  pid  | usesysid | usename | application_name  |  client_addr  | client_hostname | client_port |         backend_start         | backend_xmin |  state  |   sent_lsn   |  write_lsn   |  flush_lsn   |  replay_lsn  | write_lag | flush_lag | replay_lag | sync_priority | sync_state
-------+----------+---------+-------------------+---------------+-----------------+-------------+-------------------------------+--------------+---------+--------------+--------------+--------------+--------------+-----------+-----------+------------+---------------+------------
 12965 |    16680 | my_user    | logical_from_master | 10.10.10.10 |                 |       46630 | 2019-09-17 06:40:18.801262+02 |              | catchup | D56/248E13A0 | D56/247E3908 | D56/247E3908 | D56/247E3908 |           |           |            |             0 | async
(1 row)

But when I checked again 60 seconds later, the result set was empty…

The logs now show repeated occurrences of the same error:

2019-09-16 22:43:33.841 CEST [20260] ERROR:  could not receive data from WAL stream: server closed the connection unexpectedly
                This probably means the server terminated abnormally
                before or while processing the request.
2019-09-16 22:43:33.959 CEST [26087] LOG:  background worker "logical replication worker" (PID 20260) exited with exit code 1
2019-09-16 22:43:34.112 CEST [3510] LOG:  logical replication apply worker for subscription "logical_from_master" has started
2019-09-16 23:12:01.919 CEST [3510] ERROR:  could not send data to WAL stream: server closed the connection unexpectedly
                This probably means the server terminated abnormally
                before or while processing the request.
2019-09-16 23:12:02.073 CEST [26087] LOG:  background worker "logical replication worker" (PID 3510) exited with exit code 1
2019-09-16 23:12:02.229 CEST [4467] LOG:  logical replication apply worker for subscription "logical_from_master" has started
2019-09-17 00:27:01.990 CEST [4467] ERROR:  could not send data to WAL stream: server closed the connection unexpectedly
                This probably means the server terminated abnormally
                before or while processing the request.
2019-09-17 00:27:02.131 CEST [26087] LOG:  background worker "logical replication worker" (PID 4467) exited with exit code 1
2019-09-17 00:27:02.177 CEST [6917] LOG:  logical replication apply worker for subscription "logical_from_master" has started
2019-09-17 01:05:35.121 CEST [6917] ERROR:  could not receive data from WAL stream: server closed the connection unexpectedly
                This probably means the server terminated abnormally
                before or while processing the request.
2019-09-17 01:05:35.220 CEST [26087] LOG:  background worker "logical replication worker" (PID 6917) exited with exit code 1
2019-09-17 01:05:35.252 CEST [8204] LOG:  logical replication apply worker for subscription "logical_from_master" has started
2019-09-17 01:49:08.388 CEST [8204] ERROR:  could not send data to WAL stream: server closed the connection unexpectedly
                This probably means the server terminated abnormally
                before or while processing the request.
2019-09-17 01:49:08.520 CEST [26087] LOG:  background worker "logical replication worker" (PID 8204) exited with exit code 1
2019-09-17 01:49:08.583 CEST [9549] LOG:  logical replication apply worker for subscription "logical_from_master" has started
2019-09-17 03:06:19.601 CEST [9549] ERROR:  could not receive data from WAL stream: server closed the connection unexpectedly
                This probably means the server terminated abnormally
                before or while processing the request.
2019-09-17 03:06:19.732 CEST [26087] LOG:  background worker "logical replication worker" (PID 9549) exited with exit code 1
2019-09-17 03:06:19.754 CEST [12120] LOG:  logical replication apply worker for subscription "logical_from_master" has started
2019-09-17 03:58:48.184 CEST [12120] ERROR:  could not receive data from WAL stream: server closed the connection unexpectedly
                This probably means the server terminated abnormally
                before or while processing the request.
2019-09-17 03:58:48.254 CEST [13781] LOG:  logical replication apply worker for subscription "logical_from_master" has started
2019-09-17 03:58:48.318 CEST [26087] LOG:  background worker "logical replication worker" (PID 12120) exited with exit code 1
2019-09-17 04:27:12.838 CEST [13781] ERROR:  could not receive data from WAL stream: server closed the connection unexpectedly
                This probably means the server terminated abnormally
                before or while processing the request.
2019-09-17 04:27:12.931 CEST [26087] LOG:  background worker "logical replication worker" (PID 13781) exited with exit code 1
2019-09-17 04:27:12.967 CEST [14736] LOG:  logical replication apply worker for subscription "logical_from_master" has started
2019-09-17 04:55:48.923 CEST [14736] ERROR:  could not receive data from WAL stream: server closed the connection unexpectedly
                This probably means the server terminated abnormally
                before or while processing the request.
2019-09-17 04:55:49.032 CEST [15686] LOG:  logical replication apply worker for subscription "logical_from_master" has started
2019-09-17 04:55:49.043 CEST [26087] LOG:  background worker "logical replication worker" (PID 14736) exited with exit code 1
2019-09-17 05:41:48.526 CEST [15686] ERROR:  could not receive data from WAL stream: server closed the connection unexpectedly
                This probably means the server terminated abnormally
                before or while processing the request.
2019-09-17 05:41:48.590 CEST [17164] LOG:  logical replication apply worker for subscription "logical_from_master" has started
2019-09-17 05:41:48.638 CEST [26087] LOG:  background worker "logical replication worker" (PID 15686) exited with exit code 1
2019-09-17 06:03:32.584 CEST [17164] ERROR:  could not receive data from WAL stream: server closed the connection unexpectedly
                This probably means the server terminated abnormally
                before or while processing the request.
2019-09-17 06:03:32.642 CEST [17849] LOG:  logical replication apply worker for subscription "logical_from_master" has started
2019-09-17 06:03:32.670 CEST [26087] LOG:  background worker "logical replication worker" (PID 17164) exited with exit code 1
2019-09-17 06:40:18.732 CEST [17849] ERROR:  could not receive data from WAL stream: server closed the connection unexpectedly
                This probably means the server terminated abnormally
                before or while processing the request.

To make replication show up as catchup on the master at all, I now have to restart the slave first…


Edit 2 – I thought the rate at which sent_lsn changes might help establish how busy my servers are:

master=# select now(), * from pg_stat_replication ;
              now              |  pid   | usesysid | usename | application_name  |  client_addr  | client_hostname | client_port |        backend_start         | backend_xmin |  state  |   sent_lsn   |  write_lsn   |  flush_lsn   |  replay_lsn  | write_lag | flush_lag | replay_lag | sync_priority | sync_state
-------------------------------+--------+----------+---------+-------------------+---------------+-----------------+-------------+------------------------------+--------------+---------+--------------+--------------+--------------+--------------+-----------+-----------+------------+---------------+------------
 2019-09-17 08:31:02.547143+02 | 100194 |    16680 | my_user    | logical_from_master | 10.10.10.10 |                 |       46766 | 2019-09-17 08:23:05.01474+02 |              | catchup | D56/24B1BC88 | D56/24A39B58 | D56/24A39B58 | D56/24A39B58 |           |           |            |             0 | async
(1 row)

master=# select now(), * from pg_stat_replication ;
             now              |  pid   | usesysid | usename | application_name  |  client_addr  | client_hostname | client_port |        backend_start         | backend_xmin |  state  |   sent_lsn   |  write_lsn   |  flush_lsn   |  replay_lsn  | write_lag | flush_lag | replay_lag | sync_priority | sync_state
------------------------------+--------+----------+---------+-------------------+---------------+-----------------+-------------+------------------------------+--------------+---------+--------------+--------------+--------------+--------------+-----------+-----------+------------+---------------+------------
 2019-09-17 08:34:02.45418+02 | 100194 |    16680 | my_user    | logical_from_master | 10.10.10.10 |                 |       46766 | 2019-09-17 08:23:05.01474+02 |              | catchup | D56/24B54958 | D56/24A39B58 | D56/24A39B58 | D56/24A39B58 |           |           |            |             0 | async
(1 row)

master=# select now(), * from pg_stat_replication ;
              now              |  pid   | usesysid | usename | application_name  |  client_addr  | client_hostname | client_port |        backend_start         | backend_xmin |  state  |   sent_lsn   |  write_lsn   |  flush_lsn   |  replay_lsn  | write_lag | flush_lag | replay_lag | sync_priority | sync_state
-------------------------------+--------+----------+---------+-------------------+---------------+-----------------+-------------+------------------------------+--------------+---------+--------------+--------------+--------------+--------------+-----------+-----------+------------+---------------+------------
 2019-09-17 08:41:01.815997+02 | 100194 |    16680 | my_user    | logical_from_master | 10.10.10.10 |                 |       46766 | 2019-09-17 08:23:05.01474+02 |              | catchup | D56/24B778B0 | D56/24A39B58 | D56/24A39B58 | D56/24A39B58 |           |           |            |             0 | async
(1 row)

Looking at the above, I’m worried that write_lsn does not follow sent_lsn – I don’t know whether this is expected.

Checkpoints are happening every 15 minutes and I can’t see much IO through iostat.

When I executed select * from pg_stat_replication for the fourth time, nothing showed up (suggesting replication died by itself). Coincidentally, a checkpoint was completing around that time (I restarted replication on the slave and waited until the next checkpoint completed; this time the worker did not die).

There is one worrying sign I noticed in the logs while looking for checkpoints:

2019-09-17 08:49:02.155 CEST,,,35656,,5d382555.8b48,10873,,2019-07-24 11:31:01 CEST,,0,LOG,00000,"checkpoint starting: time",,,,,,,,,""
2019-09-17 08:56:32.086 CEST,,,35656,,5d382555.8b48,10874,,2019-07-24 11:31:01 CEST,,0,LOG,00000,"checkpoint complete: wrote 5671 buffers (0.1%); 0 WAL file(s) added, 0 removed, 1 recycled; write=449.927 s, sync=0.000 s, total=449.930 s; sync files=138, longest=0.000 s, average=0.000 s; distance=19690 kB, estimate=398335 kB",,,,,,,,,""
2019-09-17 09:04:02.186 CEST,,,35656,,5d382555.8b48,10875,,2019-07-24 11:31:01 CEST,,0,LOG,00000,"checkpoint starting: time",,,,,,,,,""
2019-09-17 09:04:51.376 CEST,,,35656,,5d382555.8b48,10876,,2019-07-24 11:31:01 CEST,,0,LOG,00000,"checkpoint complete: wrote 490 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=49.187 s, sync=0.000 s, total=49.190 s; sync files=41, longest=0.000 s, average=0.000 s; distance=3165 kB, estimate=358818 kB",,,,,,,,,""

It seems the session_start_time column shows a date from the 24th of July. Coincidentally, that is when I started replication on the master…


Get this bounty!!!

#StackBounty: #postgresql #amazon-web-services #amazon-rds Postgres roles and users – permission denied for table

Bounty: 50

I configured a Postgres 11.2 database running on RDS, following the instructions in https://aws.amazon.com/blogs/database/managing-postgresql-users-and-roles/

  1. I logged in as the master user created during RDS creation
  2. Executed CREATE SCHEMA myschema;
  3. Executed the script from the link above:
-- Revoke privileges from 'public' role
REVOKE CREATE ON SCHEMA public FROM PUBLIC;
REVOKE ALL ON DATABASE mydatabase FROM PUBLIC;

-- Read-only role
CREATE ROLE readonly;
GRANT CONNECT ON DATABASE mydatabase TO readonly;
GRANT USAGE ON SCHEMA myschema TO readonly;
GRANT SELECT ON ALL TABLES IN SCHEMA myschema TO readonly;
ALTER DEFAULT PRIVILEGES IN SCHEMA myschema GRANT SELECT ON TABLES TO readonly;

-- Read/write role
CREATE ROLE readwrite;
GRANT CONNECT ON DATABASE mydatabase TO readwrite;
GRANT USAGE, CREATE ON SCHEMA myschema TO readwrite;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA myschema TO readwrite;
ALTER DEFAULT PRIVILEGES IN SCHEMA myschema GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO readwrite;
GRANT USAGE ON ALL SEQUENCES IN SCHEMA myschema TO readwrite;
ALTER DEFAULT PRIVILEGES IN SCHEMA myschema GRANT USAGE ON SEQUENCES TO readwrite;

-- Users creation
CREATE USER reporting_user1 WITH PASSWORD 'some_secret_passwd';
CREATE USER reporting_user2 WITH PASSWORD 'some_secret_passwd';
CREATE USER app_user1 WITH PASSWORD 'some_secret_passwd';
CREATE USER app_user2 WITH PASSWORD 'some_secret_passwd';

-- Grant privileges to users
GRANT readonly TO reporting_user1;
GRANT readonly TO reporting_user2;
GRANT readwrite TO app_user1;
GRANT readwrite TO app_user2;

After that I connected as app_user1, created a new table, and added one row to it. Then I connected as reporting_user1 and tried to SELECT * FROM the new table, but saw the following message on the console:

ERROR:  permission denied for table first_table
SQL state: 42501

What am I missing in my configuration? I expect reporting_user1 to have read access to all tables created by app_user1 in myschema.


#StackBounty: #python #django #python-3.x #postgresql How do I write a Django query that finds words in a Postgres column?

Bounty: 50

I’m using Django and Python 3.7. How do I scan for words in a Django query? A word is a string surrounded by whitespace (or the beginning or end of a line). I have this …

def get_articles_with_words_in_titles(self, long_words):
    qset = Article.objects.filter(reduce(operator.or_, (Q(title__icontains=x) for x in long_words)))
    result = set(list(qset))

but if “long_words” contains words like [“about”, “still”], it will match Articles whose titles contain strings like “whereabouts” or “stillborn”. Any idea how to modify my query to respect word boundaries?
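One idea I’m considering (untested against this model): Postgres regular expressions support the word-boundary escape \y, which Django exposes through the iregex lookup, so each Q(title__icontains=x) could presumably become Q(title__iregex=r"\y" + re.escape(x) + r"\y"). The matching behaviour I’m after, sketched with Python’s own \b boundary:

```python
import re


def titles_with_whole_words(titles, long_words):
    """Keep only titles containing any of long_words as a whole word."""
    # \b is Python's word-boundary escape; the Postgres equivalent in an
    # iregex (~*) pattern is \y.
    patterns = [
        re.compile(r"\b" + re.escape(word) + r"\b", re.IGNORECASE)
        for word in long_words
    ]
    return {t for t in titles if any(p.search(t) for p in patterns)}
```

With this, "whereabouts" no longer matches "about", but a standalone "about" still does.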


#StackBounty: #ruby-on-rails #json #postgresql #jsonb How to update different json (jsonb) attributes asynchronously with rails and pos…

Bounty: 50

I have a large JSON object stored in a Postgres table column; the schema looks like this:

create_table "document" do |t|
    t.jsonb "data", default: []
end

At the moment I’m updating the JSON in the column like so:

# find the document in Rails, then…
document.data['some_attribute'][2]['another_attribute'] = 100
document.save

However, I write this JSON attribute frequently, and data is sometimes lost: if two calls write it at the same time, the whole object is saved, overwriting the other call’s change with the current object’s stale data.

For example, suppose two different saves go through at the same time with the following:

Save 1:

document.data['some_attribute'][2]['another_attribute'] = 100
document.save

Save 2:

document.data['some_attribute'][2]['different_attribute'] = 200
document.save

then one of the attribute writes will be lost, because the other save writes its whole JSON object based on old data that hasn’t been refreshed yet.

What is the best way to make both calls save their new data correctly?

Is there a JSON method that can go in and update just one attribute – like update_attribute, but for a jsonb column?
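From what I’ve read, Postgres itself has jsonb_set, which rewrites only the value at a given path server-side – in raw SQL something like UPDATE documents SET data = jsonb_set(data, '{some_attribute,2,another_attribute}', '100') (table name assumed; I haven’t verified the exact ActiveRecord incantation for issuing it). The path semantics it provides can be illustrated in plain Python:

```python
import copy


def jsonb_set(data, path, new_value):
    # Pure-Python sketch of Postgres's jsonb_set(target, path, new_value):
    # only the value at `path` is replaced and the rest of the document is
    # untouched, so two concurrent updates to *different* paths cannot
    # clobber each other the way full-object writes can.
    result = copy.deepcopy(data)
    node = result
    for key in path[:-1]:
        node = node[key]
    node[path[-1]] = new_value
    return result
```

The point is that the merge happens inside the database, not in the Rails process holding a stale copy.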


#StackBounty: #18.04 #mysql #postgresql #odbc How to install and configure the latest ODBC drivers for both MYSQL & PostgreSQL in …

Bounty: 100

I’m currently trying to access some MySQL and PostgreSQL databases via an ODBC connection, as I did on Windows. After searching around, I have found only scattered, outdated tutorials for installing and setting up ODBC connections on Ubuntu.

Can someone help me with more up-to-date instructions? I am working in a delicate production environment, so the last thing I need is a screwup. Thanks in advance.


#StackBounty: #python #object-oriented #sql #playing-cards #postgresql OOP Python Blackjack game with player accounts in PostgreSQL

Bounty: 50

This is an OOP version of the latest version of my Blackjack game. It now also uses PostgreSQL as its database.

import random
import typing as t
from enum import Enum
from functools import partial
import os
from getpass import getpass
from re import match
import bcrypt
import psycopg2
import attr


lock = partial(attr.s, auto_attribs=True, slots=True)
State = Enum("State", "IDLE ACTIVE STAND BUST")


def clear_console():
    os.system("cls" if os.name == "nt" else "clear")


def start_choice():
    while True:
        ans = input(
            "\nWhat do you want to do?\n[1] - Start playing\n[2] - Display the top\n> "
        )
        if ans in ("1", "2"):
            return ans == "1"


def ask_question(question):
    while True:
        print(f"{question} (y/n)?")
        ans = input("> ").casefold()
        if ans in ("y", "n"):
            return ans == "y"


def ask_bet(budget):
    clear_console()
    print(f"Money: ${budget}")
    print("How much money do you want to bet?")
    while True:
        money_bet = input("> ")
        try:
            cash_bet = int(money_bet)
        except ValueError:
            cash_bet = -1
        if budget >= cash_bet > 0:
            return cash_bet
        print("Please input a valid bet.")


def get_user_credentials():
    clear_console()
    while True:
        email = input("Email address (max. 255 chars.):\n> ")
        password = getpass("Password (max. 1000 chars.):\n> ").encode("utf8")
        hashed_pw = bcrypt.hashpw(password, bcrypt.gensalt()).decode("utf8")
        if len(email) < 255 and len(password) < 1000:
            if match(r"[^@]+@[^@]+\.[^@]+", email):
                return email, password, hashed_pw
            print("Please input a valid email address.")


def build_deck():
    suits = ["Hearts", "Clubs", "Diamonds", "Spades"]
    values = ["2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K", "A"]
    cards = [Card(value, suit) for value in values for suit in suits]
    return cards


@lock
class Card:

    value: str
    suit: str

    def score(self):
        if self.value in "JQK":
            return 10
        elif self.value == "A":
            return 1
        else:
            return int(self.value)

    def __str__(self):
        return f"{self.value} of {self.suit}"


@lock
class Shoe:

    cards: t.List[Card] = attr.ib(factory=build_deck)

    def shuffle(self):
        random.shuffle(self.cards)

    def draw_card(self):
        return self.cards.pop()

    def __str__(self):
        cards = [str(c) for c in self.cards]
        return str(cards)


@lock
class Hand:
    # A collection of cards that a player get from the dealer in a game
    cards: t.List[Card] = attr.ib(default=[])

    def add(self, card):
        self.cards.append(card)

    def score(self):
        # Value of cards at hand
        total = sum(card.score() for card in self.cards)

        if any(card.value == "A" for card in self.cards) and total <= 11:
            total += 10

        return total

    def __str__(self):
        return "{} ({})".format(
            "".join("[{}]".format(card.value) for card in self.cards), self.score()
        )


@lock
class Player:

    budget: int  # Number of money for bets
    bet: int = attr.ib(default=None)  # Money bet
    hand: Hand = attr.ib(factory=Hand)  # Player's hand
    state: State = attr.ib(default=State.IDLE)  # can be IDLE, ACTIVE, STAND or BUST

    def player_bet(self):
        if self.is_broke():
            raise Exception("Unfortunately you don't have any money.")
        self.bet = ask_bet(self.budget)

    """ Update self.state after self.hit
        If player busted, self.state = State.BUST, etc.
    """

    def update(self):
        hand_score = self.hand.score()
        if hand_score > 21:
            self.state = State.BUST
        elif hand_score == 21:
            self.state = State.STAND
        else:
            self.state = State.ACTIVE

    def is_busted(self):
        return self.state == State.BUST

    def is_standing(self):
        return self.state == State.STAND

    def is_idle(self):
        return self.state == State.IDLE

    def is_broke(self):
        return self.budget == 0

    def hit(self, dealer):
        # Ask dealer to add a card to the hand (at their turn)
        card = dealer.draw_card()
        self.hand.add(card)

    def play(self, dealer):
        if ask_question("Do you want to hit"):
            # Player hits
            self.hit(dealer)
            self.update()
        else:
            self.state = State.STAND

    def __str__(self):
        return f"Player Info:\nBudget: {self.budget}\nMoney bet: {self.bet}\nHand: {self.hand}"


@lock
class Dealer:

    shoe: Shoe = attr.ib(factory=Shoe)
    hand: Hand = attr.ib(factory=Hand)
    state: State = attr.ib(default=State.IDLE)

    def draw_card(self):  # Delegate method
        card = self.shoe.draw_card()
        return card

    def hit(self):
        card = self.draw_card()
        self.hand.add(card)

    def update(self):
        hand_score = self.hand.score()
        if hand_score > 21:
            self.state = State.BUST
        elif hand_score >= 17:
            self.state = State.STAND
        else:
            self.state = State.ACTIVE

    def is_busted(self):
        return self.state == State.BUST

    def is_standing(self):
        return self.state == State.STAND

    def is_idle(self):
        return self.state == State.IDLE

    def play(self):
        if self.hand.score() < 17:
            self.hit()
            self.update()

    """ In this method, the dealer and player enter a loop
        In which the player hits a card from the dealer until it stands or busts
    """

    def deal(self, player, game):
        while True:
            player.play(self)
            game.display_info()
            if player.is_busted() or player.is_standing():
                break

    def display_cards(self, player, game):
        if game.is_finished():
            return f"Dealer Info:\nHand: {self.hand}"
        elif player.state == State.ACTIVE:
            return f"Dealer Info:\nHand: [{self.hand.cards[0]}][?]"


@lock
class Database:

    sql_id: int = attr.ib(default=None)
    email: str = attr.ib(default=None)
    password: str = attr.ib(default=None)
    hashed_pw: str = attr.ib(default=None)
    budget: int = attr.ib(default=None)
    conn: t.Any = attr.ib(
        default=psycopg2.connect(
            dbname="blackjack", user="postgres", password="12344321", host="localhost"
        )
    )
    cur: t.Any = attr.ib(default=None)

    def check_account(self):
        self.cur.execute("SELECT id FROM users WHERE email=%s", (self.email,))
        return bool(self.cur.fetchone())

    def login(self):
        self.cur.execute("SELECT password FROM users WHERE email=%s", (self.email,))
        credentials = self.cur.fetchone()
        correct_hash = credentials[0].encode("utf8")
        if bcrypt.checkpw(self.password, correct_hash):
            print("You have successfully logged-in!")
        else:
            raise Exception("You have failed logging-in!")

    def register(self):
        self.cur.execute(
            "INSERT into users (email, password) VALUES (%s, %s)",
            (self.email, self.hashed_pw),
        )

    def initialize(self):
        with self.conn:
            self.email, self.password, self.hashed_pw = get_user_credentials()
            self.cur = self.conn.cursor()
            checked = self.check_account()
            if checked:
                self.login()
            else:
                self.register()
                print("You have successfully registered and received $1000 as a gift!")
            self.cur.execute(
                "SELECT ID, budget FROM users WHERE email=%s", (self.email,)
            )
            sql_id_budget = self.cur.fetchone()
            self.sql_id = sql_id_budget[0]
            self.budget = sql_id_budget[1]

    def display_top(self):
        self.cur.execute("SELECT email, budget FROM users ORDER BY budget DESC")
        top = self.cur.fetchall()
        places = range(1, len(top) + 1)
        for (a, b), i in zip(top, places):
            print(f"{i}. {a} - ${b}")

    def update_budget(self):
        self.cur.execute(
            "UPDATE users SET budget=%s WHERE id=%s", (self.budget, self.sql_id)
        )
        self.conn.commit()


@lock
class Game:

    player: Player
    dealer: Dealer = attr.ib(factory=Dealer)

    def reset_attributes(self):
        self.player.hand.cards = []
        self.player.state = State.IDLE
        self.dealer.hand.cards = []
        self.dealer.state = State.IDLE
        self.dealer.shoe = Shoe()

    def open(self):

        self.player.player_bet()

        self.dealer.shoe.shuffle()

        c1 = self.dealer.draw_card()
        c2 = self.dealer.draw_card()
        self.player.hand = Hand([c1, c2])
        self.player.update()  # Update player state

        # The dealer is the last one to get cards
        c1 = self.dealer.draw_card()
        c2 = self.dealer.draw_card()
        self.dealer.hand = Hand([c1, c2])
        self.dealer.update()

        self.display_info()

    def is_finished(self):
        if self.dealer.hand.score() >= 21:
            return True
        if self.player.is_busted() or self.player.is_standing():
            return True

    """ Pay/charge the player according to cards result
        Reset hands, states, shoe
    """

    def close(self):
        dealer_score = self.dealer.hand.score()
        if not self.player.is_busted():

            if self.dealer.state == State.BUST:
                self.player.budget += self.player.bet * 2
            else:
                if self.player.hand.score() < dealer_score:
                    self.player.budget -= self.player.bet
                elif self.player.hand.score() > dealer_score:
                    self.player.budget += self.player.bet * 2
        else:
            self.player.budget -= self.player.bet

        self.display_info()

    def run(self):
        # Run a full game, from open() to close()
        self.open()

        # If the dealer has a blackjack, close the game
        if self.is_finished():
            self.close()
            return

        # The dealer deals with the player
        self.dealer.deal(self.player, self)

        # Now the dealer's turn to play ...
        while True:
            self.dealer.play()
            if self.dealer.is_busted() or self.dealer.is_standing():
                break

        self.close()

    def display_info(self):
        clear_console()
        print(f"{self.player}n")
        print(f"{self.dealer.display_cards(self.player, self)}n")
        player_score = self.player.hand.score()
        dealer_score = self.dealer.hand.score()
        if player_score == 21:
            print("Blackjack! You won!")
        elif dealer_score == 21:
            print("Dealer has got a blackjack. You lost!")
        elif self.player.is_busted():
            print("Busted! You lost!")
        elif self.player.is_standing():
            if self.dealer.is_busted():
                print("Dealer busted! You won!")
            elif player_score > dealer_score:
                print("You beat the dealer! You won!")
            elif player_score < dealer_score:
                print("Dealer has beaten you. You lost!")
            else:
                print("Push. Nobody wins or losses.")


def main():
    database = Database()
    database.initialize()
    if start_choice():
        player = Player(database.budget)
        game = Game(player)
        playing = True
        while playing:
            game.run()
            database.budget = player.budget
            database.update_budget()
            playing = ask_question("nDo you want to play again")
            if playing:
                game.reset_attributes()
            else:
                database.cur.close()
    else:
        database.display_top()


if __name__ == "__main__":
    main()


Get this bounty!!!

#StackBounty: #python #json #postgresql #orm #sqlalchemy SQLALchemy: Query a single key or subset of keys of a Postgres JSONB column

Bounty: 50

I have a Postgres table that has a JSONB column. How do I query data of this column without loading the whole column at once in SQLAlchemy?

Let’s say the JSONB column myjsonb contains {'a': 1, 'b': 2, 'c': 3, ... 'z': 26}. I only want the value of 'a' and not all 26 values. How do I specify a query to do that?

For example,

query = session.query(MyTable).options(defer('myjsonb')).join(MyTable.myjsonb['a'])

does not work.

Any idea how I can only retrieve 'a'? And what happens if the key 'a' is not present? And how can I load multiple keys, let's say 'b' to 'f', but not all of them at once? Thanks!
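A minimal sketch of one way to do this (assuming SQLAlchemy 1.4+; the `MyTable` model and table name below are hypothetical stand-ins): index into the JSONB column and use `.astext`, which compiles to the Postgres `->>` operator, so only the requested keys are fetched. A missing key simply yields NULL (`None` in Python) rather than an error.

```python
from sqlalchemy import Column, Integer, select
from sqlalchemy.dialects import postgresql
from sqlalchemy.dialects.postgresql import JSONB
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class MyTable(Base):
    __tablename__ = "mytable"  # hypothetical table name
    id = Column(Integer, primary_key=True)
    myjsonb = Column(JSONB)


# Select only the value of 'a'; compiles to myjsonb ->> 'a'.
stmt = select(MyTable.id, MyTable.myjsonb["a"].astext.label("a"))

# Several keys at once ('b' through 'f'), still without loading
# the whole JSONB column:
keys = ["b", "c", "d", "e", "f"]
stmt_multi = select(
    MyTable.id, *[MyTable.myjsonb[k].astext.label(k) for k in keys]
)

# Show the SQL that would be sent to Postgres:
print(stmt.compile(dialect=postgresql.dialect()))
```

Because the projection happens server-side, the full JSONB document never leaves the database for these queries.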



#StackBounty: #postgresql #greenplum #postgresql-fdw Postgres_fdw Transaction isolation issue

Bounty: 50

In my case, I have connected to another Greenplum (GP) database to import data into my PostgreSQL tables, and I have written Java schedulers to refresh the data daily. But when I try to fetch the records each day using SQL functions, I get the error Greenplum Database does not support REPEATABLE READ transactions. Can anyone suggest how I can load data frequently from GP to Postgres without this isolation hassle?

I know I can refresh the tables by executing

START TRANSACTION ISOLATION LEVEL SERIALIZABLE;

But I’m not able to use the same statement inside the functions, because they run in transaction blocks.



#StackBounty: #ruby-on-rails #postgresql #activerecord Is it OK to specify a schema in `table_name_prefix`?

Bounty: 50

TL;DR: Is it OK to specify a schema in table_name_prefix?

We have a large Rails application that is not quite a traditional multi-tenant app. We have a hundred clients, all supported by one app, and that number will never grow more than 1-2 per year. Currently, every client has their own Postgresql database.

We are addressing some infrastructure concerns of having so many distinct databases…most urgently, a high number of simultaneous database connections when processing many clients’ data at the same time.

The app is not visible, even to clients, so a lot of traditional multi-tenant web site philosophies don’t apply here neatly.

  • Each tenant has a distinct Postgres database, managed in
    database.yml.
  • Each database has a schema, named for the tenant.
  • We have a model specific to each tenant with notably different code.
  • Each model uses establish_connection to select a different database and schema.
  • Each model uses a distinct table_name_prefix with the client’s unique name.

The tables vary extensively for each tenant. There is no hope or desire to normalize the clients together. Clients are not provisioned dynamically — it is always a new code release with migrations.

We intend to move each of the client schemas into one database, so fewer distinct connection pools are required. The unique names we currently have at the database, schema, and table names mean there is no possibility of name collisions.

We’ve looked at the Apartment gem, and decided it is not a good fit for what we’re doing.

We could add all hundred schemas to schema_search_path, so all clients could share the same connection pool and still find their schema. We believe this would reduce our db connection count one-hundred-fold. But we’re a bit uneasy about that. I’ve found no discussions of how many schemas are too many. Perhaps that would work, and perhaps there would be no performance penalty when resolving table names.

We’ve found a very simple solution that seems promising, by adding the schema in the table_name_prefix. We’re already setting this like:

def self.table_name_prefix
  'client99_'
end

Through experimenting and looking within Rails 4 (our current version) and Rails 5 source code, this works to specify the schema ('tenant_99') as well as the traditional table prefix ('client99'):

def self.table_name_prefix
  'tenant_99.client99_'
end

Before that change, queries looked like this:

SELECT COUNT(*) FROM "client99_products"

After, they include the schema, as desired:

SELECT COUNT(*) FROM "tenant_99.client99_products"

This seems to answer our needs, with no downsides. I’ve searched the Interwebs for people encouraging or discouraging this practice, and found no mention of it either way.

So through all this, here are the questions I haven’t found definitive answers for:

  • Is there a concern of having too many schemas listed in schema_search_path?
  • Is putting a schema name in table_name_prefix okay?



#StackBounty: #postgresql #hstore #pg-restore Can not install hstore extension to new created schema

Bounty: 100

I have an ubuntu 18.04 with postgres 9.5 installed.

My db “mydb” has the hstore extension installed. When I run “\dx hstore”, I get

                        List of installed extensions
  Name  | Version | Schema |                   Description
--------+---------+--------+--------------------------------------------------
 hstore | 1.3     | public | data type for storing sets of (key, value) pairs
(1 row)

When I do a pg_restore with a certain backup file, a new schema also called “mydb” is created, but it does not contain the “hstore” extension. The result of the “\dx” command is the same as above. hstore is in my template1 already.

The pg_restore fails with

pg_restore: [archiver (db)] could not execute query: ERROR: type “hstore” does not exist

Can anyone point out where the problem is?

Thanks

