#StackBounty: #mysql #mysql-5.7 #crash MySql: Frequent crashes on server with substantial resources

Bounty: 50

My MySQL server keeps crashing. Once or twice a day. Sometimes it repairs itself fairly quickly (10 minutes). Sometimes I have to reboot and do an fsck to get things running again.

I'm running the latest versions of Ubuntu 16.x and MySQL.

I often see a spike in memory and disk usage during the process.

What would be causing this?

I've increased RAM several times; it's currently at 3GB, but I can give it more. I tried upgrading the server's motherboard, processor, and RAM (DDR4), but that didn't help (it was running a 7-year-old processor; it now runs a 7th-gen Core i5).

Here is a typical error log:

2017-04-20T18:43:46.958430Z 0 [Note] InnoDB: page_cleaner: 1000ms intended loop took 11791ms. The settings might not be optimal. (flushed=92 and evicted=0, during the time.)
2017-04-20T18:44:11.989905Z 0 [Note] InnoDB: page_cleaner: 1000ms intended loop took 6822ms. The settings might not be optimal. (flushed=8 and evicted=0, during the time.)
2017-04-20T18:44:49.145162Z 0 [Note] InnoDB: page_cleaner: 1000ms intended loop took 5021ms. The settings might not be optimal. (flushed=0 and evicted=0, during the time.)
2017-04-20T18:45:22.322429Z 0 [Note] InnoDB: page_cleaner: 1000ms intended loop took 26338ms. The settings might not be optimal. (flushed=10 and evicted=0, during the time.)
2017-04-20T18:45:53.926808Z 0 [Note] InnoDB: page_cleaner: 1000ms intended loop took 4510ms. The settings might not be optimal. (flushed=0 and evicted=0, during the time.)
2017-04-20T18:46:03.097400Z 0 [Note] InnoDB: page_cleaner: 1000ms intended loop took 5384ms. The settings might not be optimal. (flushed=13 and evicted=0, during the time.)
2017-04-20T18:46:39.247467Z 0 [Note] InnoDB: page_cleaner: 1000ms intended loop took 14848ms. The settings might not be optimal. (flushed=8 and evicted=0, during the time.)
2017-04-20T18:47:16.271672Z 0 [Note] InnoDB: page_cleaner: 1000ms intended loop took 29107ms. The settings might not be optimal. (flushed=8 and evicted=0, during the time.)
2017-04-20T18:47:53.669557Z 0 [Note] InnoDB: page_cleaner: 1000ms intended loop took 5969ms. The settings might not be optimal. (flushed=37 and evicted=0, during the time.)
2017-04-20T18:50:23.879411Z 0 [Note] InnoDB: page_cleaner: 1000ms intended loop took 37671ms. The settings might not be optimal. (flushed=6 and evicted=0, during the time.)
2017-04-20T18:55:07.190725Z 0 [Warning] Changed limits: max_open_files: 1024 (requested 5000)
2017-04-20T18:55:07.235759Z 0 [Warning] Changed limits: table_open_cache: 431 (requested 2000)
2017-04-20T18:55:10.486670Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2017-04-20T18:55:11.563578Z 0 [Note] /usr/sbin/mysqld (mysqld 5.7.17-0ubuntu0.16.04.2) starting as process 24701 ...
2017-04-20T18:55:21.979225Z 0 [Note] InnoDB: PUNCH HOLE support available
2017-04-20T18:55:21.979250Z 0 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2017-04-20T18:55:21.979253Z 0 [Note] InnoDB: Uses event mutexes
2017-04-20T18:55:21.979256Z 0 [Note] InnoDB: GCC builtin __atomic_thread_fence() is used for memory barrier
2017-04-20T18:55:21.979259Z 0 [Note] InnoDB: Compressed tables use zlib 1.2.8
2017-04-20T18:55:21.979262Z 0 [Note] InnoDB: Using Linux native AIO
2017-04-20T18:55:22.004800Z 0 [Note] InnoDB: Number of pools: 1
2017-04-20T18:55:22.060762Z 0 [Note] InnoDB: Using CPU crc32 instructions
2017-04-20T18:55:22.104584Z 0 [Note] InnoDB: Initializing buffer pool, total size = 128M, instances = 1, chunk size = 128M
2017-04-20T18:55:24.184701Z 0 [Note] InnoDB: Completed initialization of buffer pool
2017-04-20T18:55:24.210160Z 0 [Note] InnoDB: If the mysqld execution user is authorized, page cleaner thread priority can be changed. See the man page of setpriority().
2017-04-20T18:55:26.405242Z 0 [Note] InnoDB: Highest supported file format is Barracuda.
2017-04-20T18:55:27.508456Z 0 [Note] InnoDB: Log scan progressed past the checkpoint lsn 35288448161
2017-04-20T18:55:27.508478Z 0 [Note] InnoDB: Doing recovery: scanned up to log sequence number 35288448170
2017-04-20T18:55:27.508630Z 0 [Note] InnoDB: Doing recovery: scanned up to log sequence number 35288448170
2017-04-20T18:55:27.508634Z 0 [Note] InnoDB: Database was not shutdown normally!
2017-04-20T18:55:27.508637Z 0 [Note] InnoDB: Starting crash recovery.

2017-04-20T18:56:16.516761Z 0 [Note] InnoDB: Removed temporary tablespace data file: "ibtmp1"
2017-04-20T18:56:16.516785Z 0 [Note] InnoDB: Creating shared tablespace for temporary tables
2017-04-20T18:56:16.516817Z 0 [Note] InnoDB: Setting file './ibtmp1' size to 12 MB. Physically writing the file full; Please wait ...
2017-04-20T18:56:16.621736Z 0 [Note] InnoDB: File './ibtmp1' size is now 12 MB.
2017-04-20T18:56:16.622203Z 0 [Note] InnoDB: 96 redo rollback segment(s) found. 96 redo rollback segment(s) are active.
2017-04-20T18:56:16.622211Z 0 [Note] InnoDB: 32 non-redo rollback segment(s) are active.
2017-04-20T18:56:16.622565Z 0 [Note] InnoDB: Waiting for purge to start
2017-04-20T18:56:16.672708Z 0 [Note] InnoDB: 5.7.17 started; log sequence number 35288448170
2017-04-20T18:56:16.672708Z 0 [Note] InnoDB: page_cleaner: 1000ms intended loop took 52462ms. The settings might not be optimal. (flushed=0 and evicted=0, during the time.)
2017-04-20T18:56:16.673192Z 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
2017-04-20T18:56:16.702959Z 0 [Note] Plugin 'FEDERATED' is disabled.
2017-04-20T18:56:16.851553Z 0 [ERROR] Function 'archive' already exists
2017-04-20T18:56:16.851568Z 0 [Warning] Couldn't load plugin named 'archive' with soname 'ha_archive.so'.
2017-04-20T18:56:16.851574Z 0 [ERROR] Function 'blackhole' already exists
2017-04-20T18:56:16.851575Z 0 [Warning] Couldn't load plugin named 'blackhole' with soname 'ha_blackhole.so'.
2017-04-20T18:56:16.851578Z 0 [ERROR] Function 'federated' already exists
2017-04-20T18:56:16.851579Z 0 [Warning] Couldn't load plugin named 'federated' with soname 'ha_federated.so'.
2017-04-20T18:56:16.851582Z 0 [ERROR] Function 'innodb' already exists
2017-04-20T18:56:16.851583Z 0 [Warning] Couldn't load plugin named 'innodb' with soname 'ha_innodb.so'.
2017-04-20T18:56:17.044733Z 0 [Warning] Failed to set up SSL because of the following SSL library error: SSL context is not usable without certificate and private key
2017-04-20T18:56:17.044754Z 0 [Note] Server hostname (bind-address): '0.0.0.0'; port: 3306
2017-04-20T18:56:17.044761Z 0 [Note]   - '0.0.0.0' resolves to '0.0.0.0';
2017-04-20T18:56:17.044779Z 0 [Note] Server socket created on IP: '0.0.0.0'.
2017-04-20T18:56:18.483575Z 0 [Note] Event Scheduler: Loaded 0 events
2017-04-20T18:56:18.483706Z 0 [Note] Executing 'SELECT * FROM INFORMATION_SCHEMA.TABLES;' to get a list of tables using the deprecated partition engine. You may use the startup option '--disable-partition-engine-check' to skip this check.
2017-04-20T18:56:18.483716Z 0 [Note] Beginning of list of non-natively partitioned tables
2017-04-20T18:56:25.478293Z 0 [Note] InnoDB: Buffer pool(s) load completed at 170420 13:56:25
2017-04-20T18:56:26.091240Z 0 [Note] End of list of non-natively partitioned tables
2017-04-20T18:56:26.091423Z 0 [Note] /usr/sbin/mysqld: ready for connections. Version: '5.7.17-0ubuntu0.16.04.2'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  (Ubuntu)
2017-04-20T18:56:26.155810Z 4 [ERROR] /usr/sbin/mysqld: Table './example/wp_options' is marked as crashed and should be repaired
2017-04-20T18:56:26.155889Z 5 [ERROR] /usr/sbin/mysqld: Table './example/wp_options' is marked as crashed and should be repaired
2017-04-20T18:56:26.156037Z 4 [Warning] Checking table: './example/wp_options'
2017-04-20T18:56:35.816730Z 4 [ERROR] /usr/sbin/mysqld: Table './example/wp_usermeta' is marked as crashed and should be repaired
2017-04-20T18:56:35.816875Z 4 [Warning] Checking table: './example/wp_usermeta'

UPDATE: At this point I've made the following changes:

  • In my.cnf, set innodb_buffer_pool_size to 500MB and restarted MySQL.
    It still crashed after doing this, but it seems like a wise change regardless
    (a query for verifying the running values is sketched after this list).
  • Ran mysqlcheck -u root -p --auto-repair --optimize --all-databases.
    Things ran well for a day and then crashed again.
  • In my.cnf, decreased max_connections from 151 to 80 and restarted MySQL.
    This is new and untested so far.
  • Decreased Apache's MaxRequestWorkers from 150 to 100. Didn't help; still crashing.
  • Increased the RAM allocated to the server to 5GB. Didn't help; still crashing.
  • I already had a 1GB swap file. Left it.
  • My latest suspicion is that this is being caused by a bad block.
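
A quick sanity check worth adding (a sketch; variable names as in MySQL 5.7): confirm that the running server actually uses the configured values, since the "Changed limits" warnings in the log above suggest the OS open-files limit is silently capping table_open_cache.

SHOW GLOBAL VARIABLES
WHERE Variable_name IN ('innodb_buffer_pool_size', 'max_connections',
                        'open_files_limit', 'table_open_cache');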


Get this bounty!!!

#StackBounty: #mysql MySQL Router cannot connect to MySQL after server restart

Bounty: 50

I have such a configuration:

3 MySQL Servers (v.5.7.17):

[A]. 192.168.0.10:3306

[B]. 192.168.0.20:3306

[C]. 192.168.0.30:3306

MySQL Router 2.1 (192.168.0.30:3308)

[router.cnf]

[DEFAULT]
plugin_folder  = /opt/mysql/router2.1.n/lib/mysqlrouter

[logger]
level = DEBUG

[routing]
bind_address = 192.168.0.30:3308
mode = read-write
destinations = 192.168.0.10:3306,192.168.0.20:3306,192.168.0.30:3306

MySQL Router works fine while all the servers are up (alive).
But if we restart all 3 servers (A, B, C), then MySQL Router says:

"Can't connect to MySQL server"

Any idea?

P.S. In order to make it work again I have to restart MySQL Router, so I am wondering whether MySQL Router can figure this out by itself.


Get this bounty!!!

#StackBounty: #mysql #sql Add new columns in one table based on new entries in another table in mysql

Bounty: 50

I have 2 tables: sessions and assignments. The assignments table has a column called scriptname with strings as values. The sessions table has one column per scriptname value, plus the columns id, uid, timein and timeout. As I add new rows to assignments, I get new values in the scriptname column, which I want to add as new columns to sessions with a default value of 0. How do I do this?

What I currently do is drop the table and create a new one based on the scriptname column. The problem, of course, is that I lose all my data.

DROP TABLE sessions;
SET SESSION group_concat_max_len = 1000000;
SELECT
  CONCAT(
    'CREATE TABLE sessions (',
    GROUP_CONCAT(DISTINCT
      CONCAT(scriptname, ' BOOL DEFAULT 0')
      SEPARATOR ','),
    ');')
FROM
  assignments
INTO @sql;

PREPARE stmt FROM @sql;
EXECUTE stmt;

ALTER TABLE sessions
ADD COLUMN `timeout` timestamp not null FIRST,
ADD COLUMN `timein` timestamp not null DEFAULT CURRENT_TIMESTAMP FIRST,
ADD COLUMN `uid` VARCHAR(128) not null FIRST,
ADD COLUMN `id` INT(11) AUTO_INCREMENT PRIMARY KEY not null FIRST;

I hope somebody can help me out, as I'm really not an expert on SQL! Thanks in advance.
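
One possible direction (a sketch only, not a tested solution; it assumes MySQL 5.7 and that the scriptname values are valid column identifiers): instead of dropping sessions, build a single ALTER TABLE that adds only the columns that do not exist yet, by checking information_schema.

SET SESSION group_concat_max_len = 1000000;

-- Build "ALTER TABLE sessions ADD COLUMN ..." for every distinct scriptname
-- that is not yet a column of sessions. Note that @sql ends up NULL (and
-- PREPARE fails) when there is nothing to add, so that case needs a guard.
SELECT
  CONCAT(
    'ALTER TABLE sessions ',
    GROUP_CONCAT(DISTINCT
      CONCAT('ADD COLUMN ', a.scriptname, ' BOOL DEFAULT 0')
      SEPARATOR ', '))
FROM
  assignments a
WHERE
  a.scriptname NOT IN (
    SELECT column_name
    FROM information_schema.columns
    WHERE table_schema = DATABASE()
      AND table_name = 'sessions')
INTO @sql;

PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;

Existing rows keep their data, and the new columns are filled with the default 0.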


Get this bounty!!!

#StackBounty: #mysql #debian #debian-stretch mysql in debian stretch

Bounty: 300

Debian Stretch will likely be released in the middle of this year.

mysql-server-5.x will no longer be available and is replaced with mariadb-server-10.1. I don't feel ready for the big step of moving to MariaDB; I'd prefer to stay with MySQL 5.6 or, even better, 5.7. What would you recommend: using 5.7 from Debian's unstable repository? Going with the Oracle-provided packages? Some other option?

Thanks!


Get this bounty!!!

#StackBounty: #mysql #ftp #authentication #encryption #pureftpd Pure-ftpd with MySQL – Crypt() not logging me in with hashed passwords

Bounty: 100

I am using pure-ftpd with MySQL to authenticate users.

Here is my mysql.conf:

MYSQLServer     localhost
MYSQLPort       3306
MYSQLSocket     /var/run/mysqld/mysqld.sock
MYSQLUser       user
MYSQLPassword   pwd
MYSQLDatabase   my_db
MYSQLCrypt      crypt()
MYSQLGetPW      SELECT password FROM ftp_users WHERE login="L"
MYSQLGetUID     SELECT u_id FROM ftp_users WHERE login="L"
MYSQLGetGID     SELECT g_id FROM ftp_users WHERE login="L"
MYSQLGetDir     SELECT dir FROM ftp_users WHERE login="L"
MySQLGetQTAFS   SELECT quota_files FROM ftp_users WHERE login="L"
MySQLGetQTASZ  SELECT quota_size FROM ftp_users WHERE login="L"
MySQLGetRatioUL SELECT ul_ratio FROM ftp_users WHERE login="L"
MySQLGetRatioDL SELECT dl_ratio FROM ftp_users WHERE login="L"
MySQLGetBandwidthUL SELECT ul_bandwidth FROM ftp_users WHERE login="L"
MySQLGetBandwidthDL SELECT dl_bandwidth FROM ftp_users WHERE login="L"

I have then tried restarting both pure-ftpd-mysql and pure-ftpd.

My table stores the password in a field defined as:

password    varchar(255)

When I insert a user with a plaintext password, I can log in fine with that login and password. But when I insert a hash of 'lol', such as a SHA512 or BCrypt hash, it doesn't allow me to log in with the password 'lol'.

BCrypt $2a$06$JrvxpMAvi6MnRSIvZQMMxOffIDLtEP7lrKNe0k0CTsK51v4zujfpS
SHA512 3DD28C5A23F780659D83DD99981E2DCB82BD4C4BDC8D97A7DA50AE84C7A7229A6DC0AE8AE4748640A4CC07CCC2D55DBDC023A99B3EF72BC6CE49E30B84253DAE

However, if I paste in the hash itself as the password, it successfully logs in, so I assume it is being treated as a plaintext value.
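
For what it's worth, crypt(3) verifies against hashes in its own format (a salt prefix such as $6$ for SHA-512 crypt), not raw hex digests like the SHA512 value above, and stock glibc has no $2a$ BCrypt support. Here is a sketch of generating a compatible hash from MySQL itself, using ENCRYPT(), which wraps the system crypt(3); the login, ids and directory are made up, and a glibc system is assumed:

-- ENCRYPT(str, salt) calls the system crypt(3); a '$6$' salt prefix requests
-- SHA-512 crypt on glibc. (ENCRYPT() is deprecated as of MySQL 5.7.6.)
INSERT INTO ftp_users (login, password, u_id, g_id, dir)
VALUES ('testuser',
        ENCRYPT('lol', CONCAT('$6$', SUBSTRING(MD5(RAND()), 1, 16))),
        1000, 1000, '/home/ftp/testuser');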

I have tried changing the mysql.conf to

MYSQLCrypt      crypt

But this breaks it completely. There are many sites which say to use crypt, but the comments in my configuration file list crypt() as one of the options.

I have read through many posts and forums but the closest thing I found was this, which doesn’t work at all.

http://serverfault.com/a/630806/302696

Here is what pure-ftpd starts with

Starting ftp server: Running: /usr/sbin/pure-ftpd-mysql -l mysql:/etc/pure-ftpd/db/mysql.conf -l puredb:/etc/pure-ftpd/pureftpd.pdb -l puredb:/etc/pure-ftpd/pureftpd.pdb -E -F /etc/pure-ftpd/fortunes.txt -j -H -J ALL:!aNULL:!SSLv3 -u 1000 -8 UTF-8 -A -O clf:/var/log/pure-ftpd/transfer.log -B

So basically it is either not using crypt, or I am not using it correctly. I thought it could handle SHA512 natively with MySQL, but it doesn't. The only other thing I can think of is that I need extra code alongside the configuration, but I can't see why it would require anything.


Get this bounty!!!

#StackBounty: #php #mysql Dynamically created multi select box values not inserted correctly

Bounty: 50

<table>
<tr>
    <td><select class="form-control selectpicker" data-live-search="true" name="reson[]" required="required">
            <option>--Select--</option>
            <option value="1">AAA</option>
            <option value="2">BBB</option>
            <option value="3">CCC</option>
            <option value="4">DDD</option>
            <option value="5">EEE</option>
        </select>
    </td>
    <td>
        <select class="form-control selectpicker" data-live-search="true" name="service[]" id="service" multiple="multiple">
            <option>--Select--</option>
            <option value="1">List 1</option>
            <option value="2">List 2</option>
            <option value="3">List 3</option>
            <option value="4">List 4</option>
            <option value="5">List 5</option>
            <option value="6">List 6</option>
        </select>
    </td>
    <td>
        <input type="text" class="form-control" name="name[]" placeholder="Name" />
    </td>
</tr>
<tr>
    <td><select class="form-control selectpicker" data-live-search="true" name="reson[]" required="required">
            <option>--Select--</option>
            <option value="1">AAA</option>
            <option value="2">BBB</option>
            <option value="3">CCC</option>
            <option value="4">DDD</option>
            <option value="5">EEE</option>
        </select>
    </td>
    <td>
        <select class="form-control selectpicker" data-live-search="true" name="service[]" id="service" multiple="multiple">
            <option>--Select--</option>
            <option value="1">List 1</option>
            <option value="2">List 2</option>
            <option value="3">List 3</option>
            <option value="4">List 4</option>
            <option value="5">List 5</option>
            <option value="6">List 6</option>
        </select>
    </td>
    <td>
        <input type="text" class="form-control" name="name[]" placeholder="Name" />
    </td>
</tr>
<tr>
    <td><select class="form-control selectpicker" data-live-search="true" name="reson[]" required="required">
            <option>--Select--</option>
            <option value="1">AAA</option>
            <option value="2">BBB</option>
            <option value="3">CCC</option>
            <option value="4">DDD</option>
            <option value="5">EEE</option>
        </select>
    </td>
    <td>
        <select class="form-control selectpicker" data-live-search="true" name="service[]" id="service" multiple="multiple">
            <option>--Select--</option>
            <option value="1">List 1</option>
            <option value="2">List 2</option>
            <option value="3">List 3</option>
            <option value="4">List 4</option>
            <option value="5">List 5</option>
            <option value="6">List 6</option>
        </select>
    </td>
    <td>
        <input type="text" class="form-control" name="name[]" placeholder="Name" />
    </td>
</tr>
</table>

These table rows are generated dynamically; their input field values are passed as arrays, and one of the select boxes is a multi-select.

Here is my PHP code:

<?php
extract($_POST);
foreach ($reson as $id => $value) {
    $resona = ($reson[$id]);
    $namep = ($name[$id]);
    $rsid = $ob->insert_data('tbl_reson',array("reson" => $resona, "name" => $namep), true);
    foreach ($service as $ii => $valu) {
        $r_service = ($service[$ii]);
        $ob->insert_data('tbl_service',array("reson_id" => $rsid, "service" => $r_service));
    }
}
?>

Suppose we have 3 rows, and I select two options from the first row's multi-select, three options from the second row, and four options from the third row.

When inserted into the DB, the selected options become the same for all rows (all the options selected across the multi-selects are grouped into one array and saved for each row).

  • First table

    ------------------------------
        id  |   resn    |   name
    ------------------------------
        1   |   1       |   Test
        2   |   2       |   aaa
        3   |   3       |   bbb
    ------------------------------
    
  • Second Table

    --------------------------------    
        id  |   resnid  |   service
    --------------------------------
        1   |   1       |       1
        2   |   1       |       2
        3   |   2       |       3
        4   |   2       |       4
        5   |   2       |       5
        6   |   3       |       6
    
  • Second Table Current list

    -------------------------------- 
        id  |   resnid  |   service
    --------------------------------
        1   |   1       |       1
        2   |   1       |       2
        3   |   1       |       3
        4   |   1       |       4
        5   |   1       |       5
        6   |   1       |       6
        7   |   2       |       1
        8   |   2       |       2
        9   |   2       |       3
        10  |   2       |       4
        11  |   2       |       5
        12  |   2       |       6
    
  • etc…

But what I need is to insert reson[] and name[] into one table, and service into another table based on the last inserted id of the first table.

Please help me achieve this.


Get this bounty!!!

#StackBounty: #java #mysql #apache-spark #jdbc #amazon-s3 Converting mysql table to spark dataset is very slow compared to same from cs…

Bounty: 50

I have a CSV file in Amazon S3 which is 62 MB in size (114,000 rows). I am converting it into a Spark dataset and taking the first 500 rows from it. The code is as follows:

DataFrameReader df = new DataFrameReader(spark).format("csv").option("header", true);
Dataset<Row> set = df.load("s3n://" + this.accessId.replace("\"", "") + ":" + this.accessToken.replace("\"", "") + "@" + this.bucketName.replace("\"", "") + "/" + this.filePath.replace("\"", ""));

set.take(500);

The whole operation takes 20 to 30 sec.

Now I am trying the same thing, but instead of a CSV I am using a MySQL table with 119,000 rows. The MySQL server is on Amazon EC2. The code is as follows:

String url ="jdbc:mysql://"+this.hostName+":3306/"+this.dataBaseName+"?user="+this.userName+"&password="+this.password;

SparkSession spark=StartSpark.getSparkSession();

SQLContext sc = spark.sqlContext();

DataFrameReader df = new DataFrameReader(spark).format("csv").option("header", true);
Dataset<Row> set = sc
            .read()
            .option("url", url)
            .option("dbtable", this.tableName)
            .option("driver","com.mysql.jdbc.Driver")
            .format("jdbc")
            .load();
set.take(500);

This takes 5 to 10 minutes. I am running Spark inside the JVM and using the same configuration in both cases.

My issue is not how to decrease the required time; I know that in the ideal case Spark would run in a cluster. What I cannot understand is why there is such a big time difference between the two cases.


Get this bounty!!!

#StackBounty: #sql #mysql List users, ordered by accuracy of soccer match predictions

Bounty: 100

I have a database filled with predictions of soccer matches. I need a solution to calculate the rankings from the database. There are 2 rankings: one for the entire season (playday=0) and one for each matchday (called playday in the code).

I have 3 tables:

  1. matches
  2. predictions
  3. predictions_points

To give you better insight into the database, here's some example data:

matches: contains soccer match information.

+----------+--------------+---------------------+------------+-----------+----------------+--------------+-----------------+--------------+-----------------+
| match_id | match_status |   match_datetime    | match_info | league_id | league_playday | home_team_id | home_team_score | away_team_id | away_team_score |
+----------+--------------+---------------------+------------+-----------+----------------+--------------+-----------------+--------------+-----------------+
|        1 |            3 | 2016-07-29 20:30:00 |            |         1 |              1 |            1 |               0 |            2 |               2 |
|        2 |            3 | 2016-07-30 18:00:00 |            |         1 |              1 |            5 |               1 |            4 |               2 |
|        3 |            3 | 2016-07-30 20:00:00 |            |         1 |              1 |            3 |               1 |            6 |               0 |
|        4 |            3 | 2016-07-30 20:00:00 |            |         1 |              1 |            7 |               3 |            8 |               0 |
+----------+--------------+---------------------+------------+-----------+----------------+--------------+-----------------+--------------+-----------------+

predictions: contains users' predictions and the amount of points received per prediction.

+---------------+----------+---------+-----------------+-----------------+--------------------+
| prediction_id | match_id | user_id | home_team_score | away_team_score | predictions_points |
+---------------+----------+---------+-----------------+-----------------+--------------------+
|             1 |        1 |       1 |               0 |               1 |                  1 |
|             2 |        2 |       1 |               1 |               2 |                  3 |
|             3 |        3 |       1 |               2 |               0 |                  1 |
|             4 |        4 |       1 |               2 |               0 |                  1 |
|             5 |        1 |       2 |               0 |               2 |                  3 |
|             6 |        2 |       2 |               1 |               2 |                  3 |
|             7 |        3 |       2 |               1 |               0 |                  3 |
|             8 |        4 |       2 |               0 |               0 |                  0 |
+---------------+----------+---------+-----------------+-----------------+--------------------+

predictions_points: contains the points per playday (or the entire season when playday = 0) and the ranking (which we cannot use for the query).

+-----------+---------+-----------+----------------+---------------------+---------------+
| points_id | user_id | league_id | league_playday | league_user_ranking | points_amount |
+-----------+---------+-----------+----------------+---------------------+---------------+
|         1 |       1 |         1 |              0 |                   2 |            51 |
|         2 |       2 |         1 |              0 |                   1 |            59 |
|         3 |       1 |         1 |              1 |                   2 |             6 |
|         4 |       2 |         1 |              1 |                   1 |             9 |
+-----------+---------+-----------+----------------+---------------------+---------------+

If there is a tie (in the amount of points between users), I want to order them by the number of predictions they got 100% correct (a user earns 1 point for a prediction with the wrong score but the correct win/draw/loss outcome, and at least 3 points for a correct score).

(Please note that the league_user_ranking field in the predictions_points table gets updated based on the result set of this query, so we cannot use it for the query.)

The following query works, but I feel like there’s room for improvement:

SELECT *, (
    SELECT COUNT(*)
    FROM predictions p
    INNER JOIN matches m ON m.match_id = p.match_id
    WHERE p.user_id = p_p.user_id
      AND (m.league_playday = p_p.league_playday OR p_p.league_playday = 0)
      AND p.prediction_points >= 3
) AS correctpredictions_count
FROM predictions_points p_p
WHERE p_p.league_id = :league_id
ORDER BY p_p.league_playday ASC, p_p.points_amount DESC, correctpredictions_count DESC

UPDATE/EDIT: I see that my question got bumped to the homepage. I am live-testing the code with 15 other soccer enthusiasts based on the results of the current Belgian soccer season. At the moment, this query takes about 10 seconds on a database with 3000 predictions (15 users, 8 matches per playday, 30 playdays) on a Raspberry Pi 3 running Raspbian Lite.

Expected result set:

    +-----------+---------+-----------+----------------+---------------------+---------------+--------------------------+
    | points_id | user_id | league_id | league_playday | league_user_ranking | points_amount | correctpredictions_count |
    +-----------+---------+-----------+----------------+---------------------+---------------+--------------------------+
    |         2 |       2 |         1 |              0 |                   1 |            59 |                        7 |
    |         1 |       1 |         1 |              0 |                   2 |            51 |                        6 |
    |         4 |       2 |         1 |              1 |                   1 |             9 |                        2 |
    |         3 |       1 |         1 |              1 |                   2 |             6 |                        1 |
    |         5 |       1 |         1 |              2 |                   1 |             7 |                        2 |
    |         6 |       2 |         1 |              2 |                   2 |             7 |                        1 |
    +-----------+---------+-----------+----------------+---------------------+---------------+--------------------------+
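
One direction that might help (a sketch only, untested; note the query says p.prediction_points while the table listing above shows predictions_points, so the column name may need adjusting): aggregate the correct-prediction counts once in a derived table and join it, instead of running the correlated subquery for every row of predictions_points.

SELECT p_p.points_id, p_p.user_id, p_p.league_id, p_p.league_playday,
       p_p.league_user_ranking, p_p.points_amount,
       COALESCE(SUM(c.correct_count), 0) AS correctpredictions_count
FROM predictions_points p_p
LEFT JOIN (
    -- One row per user and playday: how many predictions scored >= 3 points.
    SELECT p.user_id, m.league_playday, COUNT(*) AS correct_count
    FROM predictions p
    INNER JOIN matches m ON m.match_id = p.match_id
    WHERE p.prediction_points >= 3
    GROUP BY p.user_id, m.league_playday
) c ON c.user_id = p_p.user_id
   AND (c.league_playday = p_p.league_playday OR p_p.league_playday = 0)
WHERE p_p.league_id = :league_id
GROUP BY p_p.points_id, p_p.user_id, p_p.league_id, p_p.league_playday,
         p_p.league_user_ranking, p_p.points_amount
ORDER BY p_p.league_playday ASC, p_p.points_amount DESC, correctpredictions_count DESC;

For the season rows (league_playday = 0) the SUM adds up every playday's count; for the per-playday rows the join matches exactly one group.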


Get this bounty!!!

#StackBounty: #mysql #mariadb #sharding #maxscale #proxysql ProxySQL equivalent of MaxScale schemarouter

Bounty: 50

background

My employer developed a web application that we provide on software-as-a-service terms to our customers. To allow multiple customers with a huge mass of data to be stored in a database, we chose to let the application create a schema per tenant. So if we had 5 customers, we had something along the lines of:

mysql> show schemas;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| tenant_1           |
| tenant_2           |
| tenant_3           |
| tenant_4           |
| tenant_5           |
+--------------------+

what we have/do now

For now we are good with this and only run a MariaDB Galera cluster of three nodes behind a MaxScale readconnroute balancer. But we will eventually hit a barrier where adding nodes to this cluster won't help, because the overall data size won't fit on disk and/or the number of tables will kill performance.

To keep the complexity of the application's database layer low, our devs would like us to handle the routing transparently from the viewpoint of the application: they want the application to just talk to one "server" and not care about where each tenant is physically located.

To expand our application cluster to multiple MariaDB Galera clusters, we could use MaxScale's schemarouter, which exposes all schemas on all connected sub-clusters as if there were only one server. This fits our devs' expectations perfectly.

Now, a few months ago ProxySQL entered the scene of database proxies, claiming better performance paired with greater flexibility, among other things.

actual question

We could route queries based on hard-coded schema names (as in the sketch below), but we would refrain from doing so, as this would mean creating/updating the rules each time a tenant is created or deleted.
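
For illustration, the static routing we would rather avoid looks roughly like this through ProxySQL's admin interface (a sketch; the hostgroup numbers are made up, and every new tenant would need another row plus a reload):

-- One rule per tenant schema, pinning it to the hostgroup of its sub-cluster.
INSERT INTO mysql_query_rules (rule_id, active, schemaname, destination_hostgroup, apply)
VALUES (1, 1, 'tenant_1', 10, 1),
       (2, 1, 'tenant_2', 10, 1),
       (3, 1, 'tenant_3', 20, 1);

LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;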

How could we replicate the dynamic behaviour of MaxScale's schemarouter with ProxySQL query rules, if at all?


Get this bounty!!!

#StackBounty: Repeating the same function in a query

Bounty: 100

In the query below, there are repeated calculations, such as the three calls to SUM(p.amount). Does MySQL re-calculate the value for each call, or is there some kind of memoization optimization under the hood? If not, how can this kind of query be optimized for maximum performance?

It seems like it would be faster, after the first calculation, to refer to the result by its alias name, total_payments, but that just throws an error.

SELECT LEFT(c.last_name, 1) AS 'last_names',
    SUM(p.amount) AS 'total_payments', 
    COUNT(p.rental_id) AS 'num_rentals',
    SUM(p.amount) / COUNT(p.rental_id) AS 'avg_pay'
FROM customer c
JOIN payment p ON p.customer_id = c.customer_id
GROUP BY LEFT(c.last_name, 1)
ORDER BY SUM(p.amount) DESC;

This query runs on the MySQL Sakila sample database.
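
If MySQL does not reuse the repeated aggregate (a column alias cannot be referenced from another expression in the same SELECT list, which is why the alias throws an error), one common workaround is to compute each aggregate once in a derived table and derive the rest from it. A sketch against the same Sakila tables:

SELECT last_names,
       total_payments,
       num_rentals,
       total_payments / num_rentals AS avg_pay
FROM (
    -- Each aggregate is computed exactly once here.
    SELECT LEFT(c.last_name, 1) AS last_names,
           SUM(p.amount) AS total_payments,
           COUNT(p.rental_id) AS num_rentals
    FROM customer c
    JOIN payment p ON p.customer_id = c.customer_id
    GROUP BY LEFT(c.last_name, 1)
) t
ORDER BY total_payments DESC;

Whether this is actually faster depends on the optimizer; comparing EXPLAIN output for both forms would settle it.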


Get this bounty!!!