#StackBounty: #18.04 #python #python3 #sqlite #sqlite3 Is it possible to install SQLite 3.24+ on Ubuntu 18.04?

Bounty: 50

I am trying to update my SQLite version to 3.24 or above so that a Python app can make use of the new “UPSERT” queries. I have been trying for a few hours with little success; it refuses to update past 3.22.

I have attempted:

  • Using apt to install and reinstall sqlite / libsqlite3-dev (and various versions of this)

  • Downloading packages from launchpad (such as
    https://launchpad.net/ubuntu/+source/sqlite3/3.26.0-2) and attempting to install them

  • Using Python pip to try and update sqlite3

  • Adding a few PPA repos to try and grab it from there

  • Various other suggestions found via Google

What I have not tried:

  • Building SQLite from source (this is a bit of a last resort for me)

Is it possible to install a version of SQLite 3.24+ on Ubuntu 18.04? If so, is the only way to build from source or is there an easy way to pick up a more recent version through apt (or similar)?
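Worth noting: what matters for the Python app is not the sqlite3 command-line binary but the SQLite library that Python's sqlite3 module is linked against. A quick sanity check (a minimal sketch; whether the UPSERT branch runs depends on the library version your interpreter links):

```python
import sqlite3

# Version of the SQLite library the sqlite3 module is linked against
# (this is what decides whether UPSERT works, not the sqlite3 CLI).
print(sqlite3.sqlite_version_info)

# UPSERT (INSERT ... ON CONFLICT ... DO UPDATE) needs SQLite 3.24.0+.
if sqlite3.sqlite_version_info >= (3, 24, 0):
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v INTEGER)")
    conn.execute("INSERT INTO kv VALUES ('a', 1)")
    conn.execute(
        "INSERT INTO kv VALUES ('a', 2) "
        "ON CONFLICT(k) DO UPDATE SET v = excluded.v"
    )
    print(conn.execute("SELECT v FROM kv WHERE k = 'a'").fetchone()[0])
```

If you do end up building libsqlite3 from source, the new library only helps Python if the `_sqlite3` extension actually loads it (e.g. via `LD_LIBRARY_PATH` or a rebuild), since the module binds to whatever libsqlite3 it finds at load time.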


Get this bounty!!!

#StackBounty: #linux #bash #centos #python3 #sqlite ModuleNotFoundError: No module named '_sqlite3'

Bounty: 50

We have several Python versions installed and want to use python3.7 specifically, so I have edited my .bashrc file. We are using CentOS 7 on a Linux server.

# .bashrc

# Source global definitions
if [ -f /etc/bashrc ]; then
        . /etc/bashrc
fi

# Uncomment the following line if you don't like systemctl's auto-paging feature:
# export SYSTEMD_PAGER=
# User specific aliases and functions
alias python=python3.7
alias pip=pip3.7


[xyz@innolx20122 ~]$ python
python             python2.7          python3.6          python3.7          python3.7m-config
python2            python3            python3.6m         python3.7m

[xyz@innolx20122 ~]$ which sqlite3
/usr/bin/sqlite3

It's working with the python2.7 and python3.6 versions:

[xyz@innolx20122 ~]$ python2.7
Python 2.7.5 (default, Apr  2 2020, 13:16:51)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import sqlite3

[xyz@innolx20122 ~]$ python3.6
Python 3.6.8 (default, Apr  2 2020, 13:34:55)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import sqlite3

It's not working with the python3.7 version:

[xyz@innolx20122 ~]$ python3.7
Python 3.7.0 (default, Sep  3 2020, 09:25:25)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import sqlite3
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.7/sqlite3/__init__.py", line 23, in <module>
    from sqlite3.dbapi2 import *
  File "/usr/local/lib/python3.7/sqlite3/dbapi2.py", line 27, in <module>
    from _sqlite3 import *
ModuleNotFoundError: No module named '_sqlite3'

Update:

We installed Python following the instructions in the link below:
Python3.7 installation link

Hence my Python 3.7 version is installed under the root user's home directory:

[root@innolx20122 ~]# ls
anaconda-ks.cfg  Python-3.7.0  Python-3.7.0.tgz
[root@innolx20122 ~]# cd Python-3.7.0
[root@innolx20122 Python-3.7.0]# ls
aclocal.m4    config.status  Doc         Lib              Mac              Misc     PC              pyconfig.h     python-config     setup.py
build         config.sub     Grammar     libpython3.7m.a  Makefile         Modules  PCbuild         pyconfig.h.in  python-config.py  Tools
config.guess  configure      Include     LICENSE          Makefile.pre     Objects  Programs        python         python-gdb.py
config.log    configure.ac   install-sh  m4               Makefile.pre.in  Parser   pybuilddir.txt  Python         README.rst

I saw a link on Stack Overflow suggesting a workaround:
fix Sqlite3 issue

Please let me know whether it's OK to run the commands below from that same source directory:

yum install sqlite-devel

./configure
make && make altinstall
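For what it's worth, after installing sqlite-devel and re-running the build, a quick standard-library check confirms whether the `_sqlite3` extension (the module the traceback says is missing) actually got compiled this time:

```python
import importlib.util

# The traceback fails at "from _sqlite3 import *"; after a rebuild with
# the SQLite headers installed, find_spec should locate the C extension.
spec = importlib.util.find_spec("_sqlite3")
print("ok" if spec is not None else "missing")
```

Note that `make altinstall` (rather than `make install`) is the safer of the two, since altinstall installs only version-suffixed binaries and does not overwrite the distribution's `python3`.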


Get this bounty!!!

#StackBounty: #python #sqlite #blob #zodb #relstorage How are blobs removed in RelStorage pack?

Bounty: 250

This question is related to How to pack blobstorage with Plone and RelStorage

Using a ZODB database with RelStorage and SQLite as its backend, I am trying to remove unused blobs. Currently db.pack does not remove the blobs from disk. The minimal working example below demonstrates this behavior:

import logging
import numpy as np
import os
import persistent
from persistent.list import PersistentList
import shutil
import time
from ZODB import config, blob

connectionString = """
%import relstorage
<zodb main>
<relstorage>
blob-dir ./blob
keep-history false
cache-local-mb 0
<sqlite3>
    data-dir .
</sqlite3>
</relstorage>
</zodb>
"""


class Data(persistent.Persistent):
    def __init__(self, data):
        super().__init__()

        self.children = PersistentList()

        self.data = blob.Blob()
        with self.data.open("w") as f:
            np.save(f, data)


def main():
    logging.basicConfig(level=logging.INFO)
    # Initial cleanup
    for f in os.listdir("."):
        if f.endswith("sqlite3"):
            os.remove(f)

    if os.path.exists("blob"):
        shutil.rmtree("blob", True)

    # Initializing database
    db = config.databaseFromString(connectionString)
    with db.transaction() as conn:
        root = Data(np.arange(10))
        conn.root.Root = root

        child = Data(np.arange(10))
        root.children.append(child)

    # Removing child reference from root
    with db.transaction() as conn:
        conn.root.Root.children.pop()

    db.close()

    print("blob directory:", [[os.path.join(rootDir, f) for f in files] for rootDir, _, files in os.walk("blob") if files])
    db = config.databaseFromString(connectionString)
    db.pack(time.time() + 1)
    db.close()
    print("blob directory:", [[os.path.join(rootDir, f) for f in files] for rootDir, _, files in os.walk("blob") if files])


if __name__ == "__main__":
    main()

The example above does the following:

  1. Remove any previous database in the current directory, along with the blob directory.
  2. Create a database/storage from scratch, adding two objects (root and child, with child referenced by root) in one transaction.
  3. Remove the link from root to child in a second transaction.
  4. Close the database/storage.
  5. Reopen the database/storage and run db.pack with a timestamp one second in the future.

The output of the minimum working example is the following:

INFO:ZODB.blob:(23376) Blob directory '<some path>/blob/' does not exist. Created new directory.
INFO:ZODB.blob:(23376) Blob temporary directory './blob/tmp' does not exist. Created new directory.
blob directory: [['blob/.layout'], ['blob/3/.lock', 'blob/3/0.03da352c4c5d8877.blob'], ['blob/6/.lock', 'blob/6/0.03da352c4c5d8877.blob']]
INFO:relstorage.storage.pack:pack: beginning pre-pack
INFO:relstorage.storage.pack:Analyzing transactions committed Thu Aug 27 11:48:17 2020 or before (TID 277592791412927078)
INFO:relstorage.adapters.packundo:pre_pack: filling the pack_object table
INFO:relstorage.adapters.packundo:pre_pack: Filled the pack_object table
INFO:relstorage.adapters.packundo:pre_pack: analyzing references from 7 object(s) (memory delta: 256.00 KB)
INFO:relstorage.adapters.packundo:pre_pack: objects analyzed: 7/7
INFO:relstorage.adapters.packundo:pre_pack: downloading pack_object and object_ref.
INFO:relstorage.adapters.packundo:pre_pack: traversing the object graph to find reachable objects.
INFO:relstorage.adapters.packundo:pre_pack: marking objects reachable: 4
INFO:relstorage.adapters.packundo:pre_pack: finished successfully
INFO:relstorage.storage.pack:pack: pre-pack complete
INFO:relstorage.adapters.packundo:pack: will remove 3 object(s)
INFO:relstorage.adapters.packundo:pack: cleaning up
INFO:relstorage.adapters.packundo:pack: finished successfully
blob directory: [['blob/.layout'], ['blob/3/.lock', 'blob/3/0.03da352c4c5d8877.blob'], ['blob/6/.lock', 'blob/6/0.03da352c4c5d8877.blob']]

As you can see, db.pack does remove 3 objects ("will remove 3 object(s)"), but the blobs on the file system are unchanged.

In the unit tests of RelStorage, it appears that they do test whether blobs are removed from the file system (see here), but in the script above it does not work.

What am I doing wrong? Any hint/link/help is appreciated.


Get this bounty!!!

#StackBounty: #sqlite How to automatically rename columns from same table join?

Bounty: 100

When I join two tables, I use column aliases to avoid conflicts. But that is error-prone if there are a lot of columns. Also, some ORMs require hardcoded or dynamic prefixes on column names. Is there an automatic way to rename columns such that all columns from T22 start with “c_t22_” and all columns from T23 start with “c_t23_”?

select T1.id,
       T1.p1, T21.name as p1_name,
       T1.p2, T22.name as p2_name,
       T1.p3, T23.name as p3_name
from T1
join T2 as T21 on T1.p1 = T21.id
join T2 as T22 on T1.p2 = T22.id
join T2 as T23 on T1.p3 = T23.id
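SQLite itself has no per-alias prefixing, and the ambiguity is easy to reproduce with Python's sqlite3 module: cursor.description shows that both T2 aliases come back with identical bare column names (a minimal sketch; the table layout below is invented to match the query in the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE T2 (id INTEGER PRIMARY KEY, name TEXT);
    INSERT INTO T2 VALUES (1, 'alpha'), (2, 'beta');
    CREATE TABLE T1 (id INTEGER PRIMARY KEY, p1 INTEGER, p2 INTEGER);
    INSERT INTO T1 VALUES (10, 1, 2);
""")

cur = conn.execute(
    "SELECT T21.*, T22.* FROM T1 "
    "JOIN T2 AS T21 ON T1.p1 = T21.id "
    "JOIN T2 AS T22 ON T1.p2 = T22.id"
)
# Both aliases yield the same bare names -- nothing marks which
# 'id'/'name' pair belongs to which alias.
print([d[0] for d in cur.description])  # ['id', 'name', 'id', 'name']
```

This is why the per-column `as` aliases in the question are needed at all: the result set carries no alias information for callers (or ORMs) to disambiguate on.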


Get this bounty!!!

#StackBounty: #php #laravel #sqlite Laravel 7, SQLSTATE[23000]: Integrity constraint violation: 19 NOT NULL constraint failed when tryi…

Bounty: 50

I have three tables, User, Company and Department, with their respective models and factories.

I created a test where I’m adding the relationship:

// MyTest.php
$user = factory(User::class)->create();

$company = factory(Company::class)->make();
$company->user()->associate($user);
$company->create(); // it fails here because of NOT NULL constraint, companies.user_id

$department = factory(Department::class)->make();
$department->company()->associate($company);
$department->create();

I get the following error: Integrity constraint violation: 19 NOT NULL constraint failed: companies.user_id (SQL: insert into "companies" ("updated_at", "created_at") values (2020-03-10 07:27:51, 2020-03-10 07:27:51))

My table schema is defined like this:

// users
Schema::create('users', function (Blueprint $table) {
    $table->id();
    $table->string('name');
    $table->string('email')->unique();
    $table->timestamp('email_verified_at')->nullable();
    $table->string('phone');
    $table->integer('user_type');
    $table->string('password');
    $table->rememberToken();
    $table->timestamps();
});

// companies
Schema::create('companies', function (Blueprint $table) {
    $table->id();
    $table->foreignId('user_id')->constrained()->onDelete('cascade');
    $table->string('name');
    $table->string('contact_email');
    $table->string('contact_phone');
    $table->timestamps();
});

// departments
Schema::create('departments', function (Blueprint $table) {
    $table->id();
    $table->foreignId('company_id')->constrained()->onDelete('cascade');
    $table->string('name');
    $table->string('contact_email');
    $table->string('contact_phone');
    $table->timestamps();
});

It is my understanding that there should be no NULL values in SQL tables, which is why I am deliberately trying to avoid ->nullable() in my migrations, especially for foreign keys like these.

EDIT:

I tried doing it this way; I also made a pivot table, users_companies. Now I can attach a company, but I'm still getting an SQL error when running the test this way:

$user = factory(User::class)->create();
$company = factory(Company::class)->create();

$user->companies()->attach($company);
$company->departments()->create([
    'name' => 'Department 1',
    'contact_email' => 'department1@example.test',
    'contact_phone' => '95281000',
]);

This also fails with the error stated below:

$company = factory(Company::class)->create();
$company->departments()->save(factory(Department::class)->make());

The error is this: Integrity constraint violation: 19 NOT NULL constraint failed: departments.company_id (SQL: insert into "departments" ("name", "contact_email", "contact_phone", "company_id", "updated_at", "created_at") values (Department 1, department1@example.test, '123456789', ?, 2020-03-11 07:59:31, 2020-03-11 07:59:31)).

CompanyFactory.php

<?php

/** @var \Illuminate\Database\Eloquent\Factory $factory */

use App\Company;
use Faker\Generator as Faker;

$factory->define(Company::class, function (Faker $faker) {
    return [
        'name' => 'Company 1',
        'contact_email' => 'company@example.test',
        'contact_phone' => '123456789',
    ];
});

Factories

DepartmentFactory.php

<?php

/** @var \Illuminate\Database\Eloquent\Factory $factory */

use App\Department;
use Faker\Generator as Faker;

$factory->define(Department::class, function (Faker $faker) {
    return [
        'name' => 'Department 1',
        'contact_email' => 'department1@example.test',
        'contact_phone' => '123456789',
    ];
});


Get this bounty!!!

#StackBounty: #android #sqlite #android-room #database-performance One big database versus many small databases

Bounty: 50

My app deals with several similar datasets: they are stored in the same tables but contain different data, and the user may create more datasets. In any case, these datasets are guaranteed to be disjoint; data in one dataset is never linked to data in another.

I was wondering, would it be better to have a dedicated database for each dataset instead of having all the data in one big database?

I would expect lookup times to improve, if the user works on a smaller database. Is there a rule of thumb, how many entries a database (or table) can hold before I should worry about lookup times?

One drawback I can think of is that opening a database creates some overhead. However, I don’t expect the user to switch datasets frequently.


Get this bounty!!!

#StackBounty: #sql-server #linked-server #sqlite Column length error querying SQLite via SQL Server Linked Server

Bounty: 100

I am attempting to query SQLite to copy data into corresponding tables in SQL Server. This is the first stage of an ETL process I’m putting together.

Windows 10 Pro, SQL Server 2017 Developer Edition, SQLite 3.30.1 (installed via Chocolatey)

I have created a 64-bit system DSN for the SQLite database, created a Linked Server named NJ which points to it, and I can successfully query most tables, both via OPENQUERY and 4-part naming (after setting LevelZeroOnly for the MSADASQL provider). One table consistently throws out an error.

The table definition in SQLite:

CREATE TABLE LogMemo (lParent ,lLogId integer, lText default "");

Querying from within SQLite works.

sqlite> select lparent,llogid,lText from [LogMemo] order by lparent desc limit 4;
GCZZ2Q|834111942|Found it
GCZZ2Q|834111838|Tftc!
GCZZ2Q|833813811|On a quick girls getaway but first let me grab a cache. We pulled over by GZ, I didn't look for long before making the find. I signed the log and replaced the cache as found. TFTC
GCZZ2Q|833807936|Crossed the Delaware Bay on the  Cape May- Lewes Ferry (the New Jersey) with Lambmo, dukemom1, and  TBurket.  We had a wonderful trip,  found 19 new and interesting caches, and introduced TBurket to this great adventure. 
Ferry nice view was the first for the day, T's first find, and first NJ cache for Lambmo and dukemom.  Yes, it is a nice view of the ferry.

Querying this table via the Linked Server returns the following error:

Msg 7347, Level 16, State 1, Line 15
OLE DB provider ‘MSDASQL’ for linked server ‘nj’ returned data that does not match expected data length for column ‘[nj]...[logmemo].lText’. The (maximum) expected data length is 510, while the returned data length is 2582.

Thinking it was a problem with the long text field, I tried to give some hints about how much data should be expected coming back. I have tried the following queries:

select top 4  lparent,llogid,cast(ltext as nvarchar(4000)) as ltext from nj...logmemo order by lparent desc;
select top 4  lparent,llogid,substring(ltext,1,4000) as ltext from nj...logmemo order by lparent desc;
select top 4  lparent,llogid,substring(ltext,1,20) as ltext from nj...logmemo order by lparent desc;
select top 4  lparent,llogid,substring(ltext,1,200) as ltext from nj...logmemo order by lparent desc;
select top 4  lparent,llogid,ltext from nj...logmemo order by lparent desc;

All result in the same error. So I tried using OPENQUERY instead:

SELECT top 4 * FROM OPENQUERY([NJ], 'select lparent,llogid,cast(ltext as varchar(20)) as ltext from [LogMemo]    order by lparent desc limit 4')
SELECT top 4 * FROM OPENQUERY([NJ], 'select lparent,llogid,cast(ltext as varchar(4000)) as ltext from [LogMemo]  order by lparent desc limit 4')
SELECT top 4 * FROM OPENQUERY([NJ], 'select lparent,llogid,substr(ltext,1,8000) as lText from [LogMemo] order by lparent desc limit 4')
SELECT top 4 * FROM OPENQUERY([NJ], 'select lparent,llogid,substr(ltext,1,200) as lText from [LogMemo]  order by lparent desc limit 4')

The first three of these four queries return the first 3 of the expected 4 results, then the same error is thrown, with the exception that the reported returned data length is 728, not 2582. Note that the length of the long text associated with the last record in the original result set is 362 characters, which is 724 bytes (if we assume nvarchar).

The last query doesn’t throw an error, but I only get the first 200 characters of the value in lText.

So, the question becomes…how can I extract the full text from this field in SQLite so I can insert it into my SQL Server table?

  • Is there a limit to the size of data that can be returned for one field via this method/driver?
  • Is there another setting I’m missing somewhere, or an extra parameter for OPENQUERY?
  • Should I be looking at OPENROWSET instead?

I’m close to abandoning this angle entirely and just dumping the table data to CSV from SQLite and bulk-importing it into SQL Server.

Edit in response to one comment:

SELECT LEN(ltext) FROM nj...logmemo ORDER BY LEN(ltext) DESC;

Results in an error:

Msg 7347, Level 16, State 1, Line 24
OLE DB provider ‘MSDASQL’ for linked server ‘nj’ returned data that does not match expected data length for column ‘[nj]...[logmemo].lText’. The (maximum) expected data length is 510, while the returned data length is 2582.

Doing similar with OPENQUERY:

select * from OPENQUERY([NJ], 'select length(ltext) as lText from [LogMemo] order by length(ltext)')

Msg 7356, Level 16, State 1, Line 26
The OLE DB provider “MSDASQL” for linked server “NJ” supplied inconsistent metadata for a column. The column “lText” (compile-time ordinal 1) of object “select length(ltext) as lText from [LogMemo] order by length(ltext)” was reported to have a “DBTYPE” of 130 at compile time and 3 at run time.


Get this bounty!!!

#StackBounty: #sql #performance #sqlite #database-performance #sqlperformance SQLite "LIKE" operator is very slow compared to…

Bounty: 50

When I use the LIKE operator in SQLite, it is very slow compared to using = instead.
It takes about 14 ms with the = operator, but about 440 ms with LIKE. I am testing this with DB Browser for SQLite. Here is the query that runs fast:

SELECT re.ENTRY_ID, 
       GROUP_CONCAT(re.READING_ELEMENT, '§') AS read_element,
       GROUP_CONCAT(re.FURIGANA_BOTTOM, '§') AS furigana_bottom,
       GROUP_CONCAT(re.FURIGANA_TOP, '§') AS furigana_top,
       GROUP_CONCAT(re.NO_KANJI, '§') AS no_kanji,
       GROUP_CONCAT(re.READING_COMMONNESS, '§') AS read_commonness, 
       GROUP_CONCAT(re.READING_RELATION, '§') AS read_rel,
       GROUP_CONCAT(se.SENSE_ID, '§') AS sense_id, 
       GROUP_CONCAT(se.GLOSS, '§') AS gloss, 
       GROUP_CONCAT(se.POS, '§') AS pos, 
       GROUP_CONCAT(se.FIELD, '§') AS field,
       GROUP_CONCAT(se.DIALECT, '§') AS dialect, 
       GROUP_CONCAT(se.INFORMATION, '§') AS info 
FROM Jmdict_Reading_Element AS re LEFT JOIN 
     Jmdict_Sense_Element AS
     se ON re.ENTRY_ID = se.ENTRY_ID
WHERE re.ENTRY_ID IN (SELECT ENTRY_ID FROM Jmdict_Reading_Element WHERE READING_ELEMENT = 'example') OR 
      re.ENTRY_ID IN (SELECT ENTRY_ID FROM Jmdict_Sense_Element WHERE GLOSS = 'example')
 GROUP BY re.ENTRY_ID

The query slows down when I change

WHERE re.ENTRY_ID IN (SELECT ENTRY_ID FROM Jmdict_Reading_Element WHERE READING_ELEMENT = 'example') OR 
re.ENTRY_ID IN (SELECT ENTRY_ID FROM Jmdict_Sense_Element WHERE GLOSS = 'example')

to

WHERE re.ENTRY_ID IN (SELECT ENTRY_ID FROM Jmdict_Reading_Element WHERE READING_ELEMENT LIKE 'example') OR 
re.ENTRY_ID IN (SELECT ENTRY_ID FROM Jmdict_Sense_Element WHERE GLOSS LIKE 'example')

I need to do this so that I can use wildcards, e.g.

WHERE re.ENTRY_ID IN (SELECT ENTRY_ID FROM Jmdict_Reading_Element WHERE READING_ELEMENT LIKE 'example%') OR 
re.ENTRY_ID IN (SELECT ENTRY_ID FROM Jmdict_Sense_Element WHERE GLOSS LIKE 'example%')
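For what it's worth, the usual explanation is that LIKE is case-insensitive by default, so SQLite cannot satisfy it with an ordinary (BINARY-collated) index, while = can. A minimal sketch with EXPLAIN QUERY PLAN (the table and index are invented stand-ins for Jmdict_Sense_Element):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (entry_id INTEGER, gloss TEXT)")
conn.execute("CREATE INDEX idx_gloss ON t (gloss)")

def plan(sql):
    # Column 3 of EXPLAIN QUERY PLAN rows holds the plan description.
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

print(plan("SELECT * FROM t WHERE gloss = 'example'"))      # SEARCH via idx_gloss
print(plan("SELECT * FROM t WHERE gloss LIKE 'example%'"))  # full table SCAN

# With case-sensitive LIKE, a prefix pattern can use the index again.
conn.execute("PRAGMA case_sensitive_like = ON")
print(plan("SELECT * FROM t WHERE gloss LIKE 'example%'"))  # SEARCH via idx_gloss
```

An alternative that keeps case-insensitive matching is declaring the column (and its index) with COLLATE NOCASE; for full substring patterns like '%example%', no ordinary index helps, and FTS is the usual tool.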

Here is a link to the database itself:
https://www.mediafire.com/file/hyuymc84022gzq7/dictionary.db/file

Thanks


Get this bounty!!!