#StackBounty: #mysql mysql copy and insert rows in same table with new primary key and sequence number

Bounty: 50

I have multiple tables, each with a primary key, data columns, and a batch number.

I need a procedure that dynamically creates a statement to copy rows from a table and insert them back into the same table, with a new auto-increment value and batch number.

Original Table

(screenshot of the original table)

The MySQL query should copy the records of batch 1 and insert them as batch 10, as below:

(screenshot of the expected result)

I have tried to write the procedure below, but it copies all records, and it also fails with a timeout for a large number of records.

CREATE PROCEDURE `duplicateRows`(_schemaName text, _tableName text, _omitColumns text, var_run_seq_no int, var_old_run_seq_no int)
BEGIN
  SELECT IF(TRIM(_omitColumns) <> '',
            CONCAT('id', ',', TRIM(_omitColumns), ',', 'batch_no'),
            CONCAT('id', ',', 'batch_no'))
    INTO @omitColumns;

  SELECT GROUP_CONCAT(COLUMN_NAME ORDER BY ORDINAL_POSITION)
    INTO @columns
    FROM information_schema.columns
   WHERE table_schema = _schemaName
     AND table_name = _tableName
     AND FIND_IN_SET(COLUMN_NAME, @omitColumns) = 0;

  SET @sql = CONCAT('INSERT INTO ', _tableName, ' (', @columns, ', batch_no)',
                    ' SELECT ', @columns, ', ', var_run_seq_no,
                    ' FROM ', _schemaName, '.', _tableName);

  -- select @sql;
  PREPARE stmt1 FROM @sql;
  EXECUTE stmt1;
  DEALLOCATE PREPARE stmt1;
END
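For reference, the statement the procedure presumably needs to build must filter on the old batch number and substitute the new one; without a WHERE clause, every row is copied. The column and parameter names (`batch_no`, the old and new batch values) are taken from the question. The sketch below only assembles the SQL string the same way the procedure would, so the shape of the fix is easy to see:

```python
def build_copy_sql(table, columns, new_batch, old_batch):
    """Assemble the INSERT...SELECT that copies one batch as another.

    `columns` is the list of data columns, i.e. everything except the
    auto-increment id and batch_no (the "omit" columns in the question).
    """
    col_list = ", ".join(columns)
    return (
        f"INSERT INTO {table} ({col_list}, batch_no) "
        f"SELECT {col_list}, {new_batch} "
        f"FROM {table} "
        f"WHERE batch_no = {old_batch}"  # the filter the original query lacks
    )

# Hypothetical table and column names, for illustration only.
sql = build_copy_sql("my_table", ["col_a", "col_b"], new_batch=10, old_batch=1)
```

Inside the stored procedure the same clause can be appended with `CONCAT(..., ' WHERE batch_no = ', var_old_run_seq_no)`. For very large batches, copying in LIMIT-ed ranges of the primary key is one way to avoid the timeout.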


Get this bounty!!!

#StackBounty: #windows #mysql #python #openssl Python/MySQL on Windows. Problems with openssl

Bounty: 100

I have a MySQL Server set up to use SSL and I also have the CA Certificate.

When I connect to the server using MySQL Workbench, I do not need the certificate. I can also connect to the server using Python and MySQLdb on a Mac without the CA-certificate.

But when I try to connect using the exact same setup of Python and MySQLdb on a Windows machine, I get access denied. It appears that I need the CA. And when I supply the CA, I get the following error:

_mysql_exceptions.OperationalError: (2026, 'SSL connection error')

My code to open the connection is below:

db = MySQLdb.connect(host="host.name",
                     port=3306,
                     user="user",
                     passwd="secret_password",
                     db="database",
                     ssl={'ca': '/path/to/ca/cert'})

Could anyone point out what the problem is on Windows? I believe it has to do with OpenSSL and Python. I installed OpenSSL from here, but I don’t think Python is using the version I installed, since the version Python prints is not the same.

This is what Python prints. It is not a very old version and should have worked when connecting to MySQL:

OpenSSL 1.0.2j 26 Sep 2016

I am really not used to working with OpenSSL and its issues. I’ve tried all the solutions Google turns up for this error, and you would think one of them would work, but none did; hence I’m guessing the problem is with OpenSSL and Python on my system. How should I go about identifying the exact problem?
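To pin down which OpenSSL build a given Python interpreter actually links against (as opposed to whatever `openssl.exe` happens to be on PATH), the standard `ssl` module reports it directly. One caveat, worth keeping in mind when reading the output: MySQLdb goes through libmysqlclient, which carries its own SSL library (historically yaSSL or OpenSSL depending on the build), so the interpreter's OpenSSL and the client library's can differ, and a TLS version or cipher mismatch between client library and server is a common cause of error 2026:

```python
import ssl

# OpenSSL build that this Python interpreter was compiled/linked against.
# Comparing this against the separately installed OpenSSL (and against what
# the MySQL client library ships with) is the point of the check.
print(ssl.OPENSSL_VERSION)       # version string
print(ssl.OPENSSL_VERSION_INFO)  # same information as a tuple
```

If this prints the old version even after installing a newer OpenSSL, the interpreter (or the MySQLdb wheel) was built against its own bundled copy and the system install is irrelevant to it.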

I also do not understand how I can connect to the MySQL Server without the CA Certificate using Python on a Mac or MySQL Workbench, but get an access denied error on Windows using Python :/

UPDATE:

Python version 2.7.13

MySQL Server Enterprise version 5.7.18



#StackBounty: #mysql #foreign-key #constraint MySQL complains key exists but I can't find it

Bounty: 100

I have a large (in both schema and data) MySQL database with lots of foreign key constraints. Recently I discovered that a script cannot create a table because a key with the given name already exists.

I traced down the problem to the following:

If I run something like this:

CREATE TABLE `foo` (
  `bar` int UNSIGNED NOT NULL,
  CONSTRAINT `BAZ` FOREIGN KEY (`bar`) REFERENCES `qux` (`bar`) ON DELETE CASCADE ON UPDATE CASCADE
);

I’m getting:

ERROR 1022 (23000): Can’t write; duplicate key in table ‘foo’

But if I:

SELECT *
  FROM information_schema.REFERENTIAL_CONSTRAINTS
 WHERE CONSTRAINT_SCHEMA = "my_db"
   AND CONSTRAINT_NAME LIKE "BAZ";

I’m getting an empty set.

I have also tried to dump the schema and search for “BAZ” there but found nothing.

Creating the table with the foreign key named anything but “BAZ” goes through.

How could it be?
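One plausible explanation (not verifiable from the question alone): the name may be in use by a constraint of another type or in another schema on the same server, or left orphaned in InnoDB's internal data dictionary after an interrupted DROP, none of which a query filtered to `CONSTRAINT_SCHEMA = "my_db"` and to `REFERENTIAL_CONSTRAINTS` will show. A first diagnostic step is to search every schema and every constraint type. The helper below only builds that query string, using the standard `information_schema` table and column names:

```python
def constraint_search_sql(name):
    """SQL that looks for a constraint name in every schema, not just one.

    information_schema.TABLE_CONSTRAINTS covers PRIMARY KEY, UNIQUE and
    FOREIGN KEY constraints alike.
    """
    return (
        "SELECT CONSTRAINT_SCHEMA, TABLE_NAME, CONSTRAINT_TYPE "
        "FROM information_schema.TABLE_CONSTRAINTS "
        f"WHERE CONSTRAINT_NAME = '{name}'"
    )

sql = constraint_search_sql("BAZ")
```

On MySQL 5.6+, `information_schema.INNODB_SYS_FOREIGN` (whose `ID` column stores names as `schema/constraint`) can additionally reveal orphaned foreign-key entries that no `SHOW CREATE TABLE` or schema dump will mention.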



#StackBounty: #python #mysql #django How to concat two columns of table django model

Bounty: 50

I am implementing search in my project. What I want is to concatenate two columns in the WHERE clause to get results from the table.

Here is what I am doing:

from django.db.models import Q

if 'search[value]' in request.POST and len(request.POST['search[value]']) >= 3:
    search_value = request.POST['search[value]'].strip()

    q.extend([
        Q(id__icontains=request.POST['search[value]']) |
        (Q(created_by__first_name=request.POST['search[value]']) & Q(created_for=None)) |
        Q(created_for__first_name=request.POST['search[value]']) |
        (Q(created_by__last_name=request.POST['search[value]']) & Q(created_for=None)) |
        Q(created_for__last_name=request.POST['search[value]']) |
        (Q(created_by__email__icontains=search_value) & Q(created_for=None)) |
        Q(created_for__email__icontains=search_value) |
        Q(ticket_category=request.POST['search[value]']) |
        Q(status__icontains=request.POST['search[value]']) |
        Q(issue_type__icontains=request.POST['search[value]']) |
        Q(title__icontains=request.POST['search[value]']) |
        Q(assigned_to__first_name__icontains=request.POST['search[value]'])
    ])

Now I want to add another OR condition like:

CONCAT_WS(' ', created_by__first_name, created_by__last_name) LIKE '%search_value%'

But when I add this condition to the queryset, it is combined with AND:

where = ["CONCAT_WS(' ', profiles_userprofile.first_name, profiles_userprofile.last_name) like '"+request.POST['search[value]']+"' "]
tickets = Ticket.objects.get_active(u, page_type).filter(*q).extra(where=where).exclude(*exq).order_by(*order_dash)[cur:cur_length]

How do I convert this into an OR condition?
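In Django, `django.db.models.functions.Concat` can annotate the queryset with a full-name column, after which the condition is just another `Q` object in the same OR chain, instead of an `.extra(where=...)` clause that always gets ANDed on. Since that needs a live Django project to run, the sketch below models the intended OR semantics in plain Python; the row data and field names are made up:

```python
# Plain-Python model of the query logic: each Q object becomes a predicate,
# and the concatenated full-name check is OR-ed in with the rest rather than
# AND-ed on afterwards (which is what .extra(where=...) effectively does).
def matches(row, term):
    term = term.strip().lower()
    full_name = " ".join([row["first_name"], row["last_name"]]).lower()
    return (
        term in row["email"].lower()
        or term in row["title"].lower()
        or term in full_name  # the CONCAT_WS(' ', first, last) condition
    )

rows = [
    {"first_name": "Ada", "last_name": "Lovelace",
     "email": "ada@example.com", "title": "Report"},
    {"first_name": "Alan", "last_name": "Turing",
     "email": "alan@example.com", "title": "Bug"},
]
hits = [r for r in rows if matches(r, "ada love")]  # matches across both columns
```

In Django itself this corresponds to something like `qs.annotate(full_name=Concat('created_by__first_name', Value(' '), 'created_by__last_name')).filter(existing_q | Q(full_name__icontains=search_value))`, where the relation names are assumed from the question.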



#StackBounty: #mysql #r #jdbc #timeout #dbi Is there a way to timeout a MySql query when using DBI and dbGetQuery?

Bounty: 50

I realize that

dbGetQuery comes with a default implementation that calls dbSendQuery, then dbFetch, ensuring that the result is always freed by dbClearResult.

and

dbClearResult frees all resources (local and remote) associated with a result set. In some cases (e.g., very large result sets) this can be a critical step to avoid exhausting resources (memory, file descriptors, etc.)

But my team just experienced a locked table whose query we had to go into MySQL and kill by pid, so I’m wondering: is there a way to time out a query submitted using the DBI package?

I’m looking for and can’t find the equivalent of

dbGetQuery(conn = connection, 'select stuff from that_table', timeout = 90)

I tried this, and profiled the function with and without the parameter set, and it doesn’t appear to do anything; why would it, if dbClearResult is always in play?
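As far as the DBI generic goes, `dbGetQuery()` does not define a `timeout` argument, so the parameter above is silently ignored. Two workarounds are commonly used: enforce the limit on the server (MySQL 5.7.8+ supports `SET SESSION max_execution_time = 90000;`, in milliseconds, applying to read-only SELECTs), or enforce it client-side by running the blocking call under an external deadline. The sketch below shows the client-side pattern; it is written in Python because the pattern is language-agnostic, and `run_with_timeout` wraps whatever blocking call stands in for `dbGetQuery`:

```python
import concurrent.futures

def run_with_timeout(fn, seconds):
    """Run a blocking call with a deadline; raises TimeoutError if exceeded.

    Note: the worker thread keeps running after the timeout fires, so for a
    real database you still want to KILL the server-side query (or rely on a
    server-side limit such as MySQL's max_execution_time).
    """
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(fn)
        return future.result(timeout=seconds)

# Stand-in for a query that finishes well within the deadline.
result = run_with_timeout(lambda: sum(range(1000)), seconds=5)
```

The key limitation is in the comment: a client-side timeout only stops the client from waiting; the locked table from the question still needs the server-side kill.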



#StackBounty: #php #mysql #database #rest #codeigniter Create API endpoint for fetching dynamic data based on time

Bounty: 50

I have a scraper which periodically scrapes articles from news sites and stores them in a database [MySQL].
The way the scraping works is that the oldest articles are scraped first, and then I move on to more recent articles.

For example, an article written on the 1st of Jan would be scraped first and given ID 1, and an article written on the 2nd of Jan would have ID 2.

So recent articles have a higher ID than older articles.

There are multiple scrapers running at the same time.

Now I need an endpoint which I can query based on the timestamp of the articles, with a limit of 10 articles on each fetch.

The problem arises, for example, when there are 20 articles posted with a timestamp of 1499241705. When I query the endpoint with timestamp 1499241705, a check is made to return all articles with timestamp >= 1499241705, in which case I would always get the same 10 articles each time; changing the condition to > would mean I skip articles 11-20. Adding another WHERE clause on id is unsuccessful, because articles may not always be inserted in date order, as the scrapers run concurrently.

Is there a way I can query this endpoint so that I always get consistent data from it, with the latest articles coming first and then the older articles?

EDIT:

   +------+----------------+
   |   id | unix_timestamp |
   +------+----------------+
   |    1 |           1000 |
   |    2 |           1001 |
   |    3 |           1002 |
   |    4 |           1003 |
   |   11 |           1000 |
   |   12 |           1001 |
   |   13 |           1002 |
   |   14 |           1003 |
   +------+----------------+

The last timestamp and ID are sent through the WHERE clause.

E.g.:

$this->db->where('unix_timestamp <=', $timestamp);
$this->db->where('id <', $offset);
$this->db->order_by('unix_timestamp', 'DESC');
$this->db->order_by('id', 'DESC');

On querying with a timestamp of 1003, ids 14 and 4 are fetched. But during the next call, id 4 would be the offset, thereby not fetching id 13 and only fetching id 3 the next time around. So data would be missing.
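The standard fix for this is keyset (seek) pagination on the composite key: compare `(unix_timestamp, id)` as a tuple, i.e. `WHERE unix_timestamp < :ts OR (unix_timestamp = :ts AND id < :id)`, ordered by both columns descending. Rows sharing a timestamp are then disambiguated by id, so nothing is skipped and nothing repeats. A runnable model of that predicate against the sample table above:

```python
# Sample rows from the question, as (id, unix_timestamp) pairs.
rows = [(1, 1000), (2, 1001), (3, 1002), (4, 1003),
        (11, 1000), (12, 1001), (13, 1002), (14, 1003)]

def page(rows, last_ts=None, last_id=None, limit=2):
    """Keyset pagination: rows strictly 'before' the (last_ts, last_id) cursor,
    newest first."""
    ordered = sorted(rows, key=lambda r: (r[1], r[0]), reverse=True)
    if last_ts is not None:
        # WHERE ts < :ts OR (ts = :ts AND id < :id)
        ordered = [r for r in ordered
                   if r[1] < last_ts or (r[1] == last_ts and r[0] < last_id)]
    return ordered[:limit]

first = page(rows)                                           # [(14, 1003), (4, 1003)]
second = page(rows, last_ts=first[-1][1], last_id=first[-1][0])  # [(13, 1002), (3, 1002)]
```

Note that id 13 now appears on the second page instead of being skipped. In CodeIgniter the OR must be grouped by hand in a single `where()` string, since chained `where()` calls are always ANDed together.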



#StackBounty: #sql #mysql #sync Query to determine insert / update operations to synchronize data

Bounty: 100

I’ve been developing a system where the mobile syncs data from the server. The phone can run offline, but when connected to the internet it needs to take care of inserted or updated data.

Here is the output result (this is just for demonstration purposes; it will be changed a bit):

[
    {
        "version_send": "1",
        "action_peformed": "insert,update",
        "version": "1,4",
        "change_date": "2017-06-22 16:42:03",
        "audit_name": "Push Ups",
        "current_name": "Pushup",
        "id_exercise": "1",
        "action_peformed_": "update"
    },
    {
        "version_send": "1",
        "action_peformed": "insert",
        "version": "1",
        "change_date": "2017-06-22 16:42:06",
        "audit_name": "Squat",
        "current_name": "Squat",
        "id_exercise": "2",
        "action_peformed_": "igonre"
    },
    {
        "version_send": "1",
        "action_peformed": "insert",
        "version": "1",
        "change_date": "2017-06-22 16:42:09",
        "audit_name": "Chin Ups",
        "current_name": "Chin Ups",
        "id_exercise": "3",
        "action_peformed_": "igonre"
    },
    {
        "version_send": "1",
        "action_peformed": "insert,update",
        "version": "2,3",
        "change_date": "2017-06-22 16:44:25",
        "audit_name": "Pull Ups",
        "current_name": "Pull Up",
        "id_exercise": "4",
        "action_peformed_": "insert"
    },
    {
        "version_send": "1",
        "action_peformed": "insert",
        "version": "2",
        "change_date": "2017-06-22 16:45:08",
        "audit_name": "Sit Up",
        "current_name": "Sit Up",
        "id_exercise": "5",
        "action_peformed_": "insert"
    },
    {
        "version_send": "1",
        "action_peformed": "insert,update,update",
        "version": "2,3,4",
        "change_date": "2017-06-22 16:45:28",
        "audit_name": "Pike Push Up",
        "current_name": "Pike Pushups",
        "id_exercise": "6",
        "action_peformed_": "insert"
    }
]

The goal here was to determine whether the phone’s version calls for an update or for an insert of the new data.

For example:

{
        "version_send": "1",
        "action_peformed": "insert,update,update",
        "version": "2,3,4",
        "change_date": "2017-06-22 16:45:28",
        "audit_name": "Pike Push Up",
        "current_name": "Pike Pushups",
        "id_exercise": "6",
        "action_peformed_": "insert"
    }

The data has “action_peformed”: “insert,update,update”, but the phone has version 1, and the first insert on that data was from version 2, so the output action should be insert.

But if the phone’s version were 2, the action should be update.
You get the point.

The current query that does this looks like this:

SELECT 
        :version as `version_send`,
        GROUP_CONCAT(ea.`action_peformed` ORDER BY ea.`action_peformed` ASC) as `action_peformed`,
        GROUP_CONCAT(ea.`version` ORDER BY ea.`version` ASC) as `version`,
        ea.change_date,
        ea.name as audit_name,
        e.name as current_name,
        e.id_exercise ,
        ##BEGIN Determine what should be done
        IF (:version < MAX(ea.`version`),
           IF(
                FIND_IN_SET (:version, 
                    GROUP_CONCAT( DISTINCT ea.`version` SEPARATOR ',' )  
                    ), 
                'update',##'is less than max but has one or more versions - update',

                IF (:version > MIN(ea.`version`),
                'update',##'is less than max but has one or more versions - update',
                'insert'##'is less than max but has no versions - insert'
                )
            )
        ,'igonre') as `action_peformed_`
        ##END Determine what should be done
        FROM exercise_audit AS ea
        LEFT JOIN exercise AS e ON e.id_exercise = ea.reference_id
        GROUP BY e.id_exercise

And here are the tables that I used:

CREATE TABLE `exercise` (
  `id_exercise` int(11) NOT NULL,
  `name` varchar(45) NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

CREATE TABLE `exercise_audit` (
  `id_exercise_audit` int(11) NOT NULL,
  `name` varchar(45) NOT NULL COMMENT 'The name will take the new value if inserted ,or the old value if edited.And the latest value if deleted.',
  `action_peformed` varchar(45) NOT NULL,
  `version` int(11) NOT NULL,
  `reference_id` int(11) NOT NULL,
  `change_date` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

CREATE TABLE `version` (
  `id_version` int(11) NOT NULL,
  `date` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `state` enum('C','P') NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

Showing rows 0 – 5 (6 total, Query took 0.0010 seconds.)

I feel that the query is missing something; how could I improve it?
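The nested `IF`/`FIND_IN_SET` branch is the hardest part of the query to read and to test. One way to pin the rule down is to transcribe it in isolation; the function below is a direct translation of that branch (the `versions` set plays the role of the `GROUP_CONCAT` of audit versions, and the return values, typos included, match the query's literals), with the sample output rows above as checks:

```python
def decide_action(phone_version, versions):
    """Direct transcription of the query's IF/FIND_IN_SET decision branch.

    `versions` is the set of audit-row versions for one exercise; the return
    value is what ends up in action_peformed_ (spelling kept from the schema).
    """
    if phone_version < max(versions):
        if phone_version in versions:       # FIND_IN_SET(:version, versions)
            return "update"
        if phone_version > min(versions):   # changed before and after the phone's version
            return "update"
        return "insert"                     # all changes are newer than the phone
    return "igonre"                         # literal from the query, typo and all

# Rows from the sample output, phone version 1:
assert decide_action(1, {1, 4}) == "update"       # id_exercise 1
assert decide_action(1, {1}) == "igonre"          # id_exercise 2
assert decide_action(1, {2, 3}) == "insert"       # id_exercise 4
assert decide_action(1, {2, 3, 4}) == "insert"    # id_exercise 6
```

Having the rule in one testable place makes it easier to spot edge cases (for instance, what should happen when the phone's version equals the maximum audit version) before encoding them back into SQL.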

If anyone is interested, I can share the code on GitHub if that would be easier.



#StackBounty: #mysql #base64 Need to group by a segment of an encoded string in mysql

Bounty: 50

I have a field in my database which is base64-encoded. After using FROM_BASE64() on the field, it looks like this:

<string>//<string>//<string>/2017//06//21//<string>//file.txt

There may be an undetermined number of strings at the beginning of the path; however, the date (YYYY//MM//DD) will always have exactly two fields to the right of it (a string, followed by the file name).

I want to group and sort by this YYYY//MM//DD pattern and get a count of all paths sharing each date.

So basically I want to do this:

select <YYYY//MM//DD portion of decoded_path>, count(*)
  from table
 group by <YYYY//MM//DD portion of decoded_path>
 order by <YYYY//MM//DD portion of decoded_path>;
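Since the prefix has a variable number of fields, the date is easiest to anchor from the right: it is always followed by exactly two fields. The sketch below shows that extraction with a regular expression over hypothetical decoded paths (the sample strings are made up, shaped like the one in the question):

```python
import re
from collections import Counter

# Hypothetical decoded paths, shaped like the question's example.
paths = [
    "a//b//c//2017//06//21//x//file.txt",
    "a//b//2017//06//21//y//file.txt",
    "a//2017//06//22//z//file.txt",
]

# YYYY//MM//DD followed by exactly two fields: a string, then the file name.
DATE = re.compile(r"(\d{4}//\d{2}//\d{2})//[^/]+//[^/]+$")

def date_of(path):
    """Pull the YYYY//MM//DD segment, anchoring from the right end."""
    m = DATE.search(path)
    return m.group(1) if m else None

counts = Counter(date_of(p) for p in paths)
# Counter({'2017//06//21': 2, '2017//06//22': 1})
```

On the SQL side, assuming the separators are consistently `//`, the equivalent expression is `SUBSTRING_INDEX(SUBSTRING_INDEX(FROM_BASE64(col), '//', -5), '//', 3)`: take the last five `//`-delimited fields, then keep the first three of those (the date). That expression can be used in both the GROUP BY and the ORDER BY.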

