#StackBounty: #oracle #trigger #oracle-11g #plsql #debugging Why is an Oracle Package Variable Intermittently Incorrect after Multiple …

Bounty: 200

I am supporting an application that runs on Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 – 64bit Production.

We have a table that stores hourly data; triggers on it call a package to sync to a monthly table that stores a total amount.

When the hourly table is updated, the on-update trigger saves the old amount to a package-level table variable in this format: primary-key fields, amount.

TYPE HOURLY_REC IS RECORD (
      KEY       NUMBER(10),
      SUB_KEY   NUMBER,
      MONTH     VARCHAR2(6),
      AMOUNT    NUMBER);

TYPE HOURLY_CHANGE_TAB IS TABLE OF HOURLY_REC INDEX BY BINARY_INTEGER;

Amount is set to -old_amount, and the new amount is then added to it. This happens for each hour record that gets updated, so if there are 120 hours in a month, it subtracts the old amount 120 times from the variable and adds the new amount 120 times to get the total change. An after-statement trigger or a manual call then updates the month's record with this total and clears the package variable. When we are updating a large number of hours, we set a flag to disable the month update and manually call the update method once at the end, for performance. Regardless, the issue has already happened by the time this method gets called.

When I update a deal that runs for multiple years, the total amount is off by (new amount - old amount) in a random number of months. So if the old amount was 100 per hour and the new amount is 10 per hour, then the amount we push to the month record is off by -90 from what it should be. The majority of the months are correct. I have seen zero, one, two, and four months broken after each update/rollback testing run, running the exact same update each time and rolling back after checking for the error.

I logged every time the package variable changes to a new table and saw the following:
LOGGED DATA

KEY and SUB_KEY are the keys in this example. Month is the monthly record our hour is part of. The Amount column shows the amount to be added to the package variable. Index is the index of the table variable being updated for this row. Starting Amount is the value of the amount column at the start of logging the update. Calc Amount is what the package variable should store after we add the amount to the starting amount. Ending Amount is the value retrieved from the package variable after the update is saved. Row Index is an incrementing unique sequence in the logging table that tracks the order in which the updates ran. Correct Amount is a window function summing the amount, ordered by row_index and partitioned by the keys, to show what the package variable should store on every row.

Notice what happens between rows 2 and 3. The ending amount on row 2 is the correct value of 540. The starting amount on row 3 is 550, shifted by new amount (20) minus old amount (10). This relationship held for an update from 100 to 10 as well: the shift was -90 (new minus old), not the new amount of 10. And yet there is no gap in the row index; we go straight from 16188 to 16189. I created a copy of the package and removed everything but the method that updates the package variable. There is no other path to update the table variable except via the method that is logged. The package variable is defined within the package body so no other package can update it, and I deleted all other methods in the package but the one that updates it. I can’t fathom how this variable is changing between the end of one call and the start of the next.

CREATE OR REPLACE PACKAGE BODY TEST_PKG AS

   PKG_HOUR_COUNT  NUMBER := 0;
   HOUR_TAB            HOURLY_CHANGE_TAB ;
...

Again, the number of months affected by this issue is random, and they are all broken by the same amount; one month isn’t offsetting another through some off-by-one error.

Note that I have endeavored to reduce the following code to a minimal version. Table names and columns have been changed. If I run the trigger code in a loop instead of running an update, the issue does not happen.

I created a new version of the package, deleted everything not related to these calls, and was still able to reproduce the issue. I tried to create a mini copy of the table with just the rows I needed and may have reproduced the issue once there. I know that’s vague, but it hasn’t happened since on the mini copy.

SUB_SUB_KEY is not used in the package variable, since it only distinguishes hours within this table; all hours with the same KEY and SUB_KEY will be in the same month, regardless of SUB_SUB_KEY.

Table Definition:

CREATE TABLE ORDER_HOURS
(KEY NUMBER(12,0) NOT NULL ENABLE, 
    SUB_KEY NUMBER(6,0) NOT NULL ENABLE, 
    SUB_SUB_KEY NUMBER(3,0) NOT NULL ENABLE, 
    GMT_TIME DATE NOT NULL, 
    DAY DATE NOT NULL ENABLE, 
    HOUR VARCHAR2(4 BYTE) NOT NULL ENABLE, 
    AMOUNT NUMBER(10,3) NOT NULL ENABLE, 
    CONFIRMED_FLAG VARCHAR2(1 BYTE) DEFAULT 'N' NOT NULL ENABLE,        
     CONSTRAINT HOUR_PK PRIMARY KEY (KEY, SUB_KEY, SUB_SUB_KEY, GMT_TIME)
  USING INDEX 
   ) PARTITION BY RANGE ("DAY") INTERVAL (NUMTOYMINTERVAL(3,'MONTH'))
   ( PARTITION p0 VALUES LESS THAN (TO_DATE('1-1-2007', 'DD-MM-YYYY')),
      PARTITION p1 VALUES LESS THAN (TO_DATE('1-1-2008', 'DD-MM-YYYY')),
      PARTITION p2 VALUES LESS THAN (TO_DATE('1-7-2009', 'DD-MM-YYYY')),
      PARTITION p3 VALUES LESS THAN (TO_DATE('1-1-2010', 'DD-MM-YYYY')) ); 

Trigger:

CREATE OR REPLACE TRIGGER UPDATE_HOUR BEFORE
  UPDATE OF AMOUNT
    ON ORDER_HOURS
    FOR EACH ROW    
    BEGIN 
        IF :NEW.CONFIRMED_FLAG = 'Y' AND :OLD.CONFIRMED_FLAG = 'Y' 
            THEN RAISE_APPLICATION_ERROR(-20914, 'You cannot Update.');
        ELSE
            IF :NEW.CONFIRMED_FLAG = 'A' THEN
                :NEW.CONFIRMED_FLAG := 'Y';
            END IF;
            IF TEST_PKG.GET_UPDATE_HOURS_FLAG = 'Y' THEN
                TEST_PKG.UPDATE_HOURS(:OLD.KEY, :NEW.KEY, :OLD.SUB_KEY, :NEW.SUB_KEY, :OLD.SUB_SUB_KEY, :NEW.SUB_SUB_KEY, :OLD.DAY, :NEW.DAY, :OLD.AMOUNT, :NEW.AMOUNT);
            END IF;
        END IF;
    END;
/

Logging Table:

CREATE TABLE HOURS_DATA(
KEY   number,
SUB_KEY   number,
MONTH   VARCHAR2(4000),
AMOUNT NUMBER,
VARIABLE_INDEX NUMBER,
STARTING_AMOUNT NUMBER,
CALC_AMOUNT NUMBER,
ENDING_AMOUNT NUMBER,
ROW_INDEX NUMBER,
CREATE_DATE DATE
);

Trimmed-down package; the only portion remaining is the part that touches HOUR_TAB:

CREATE OR REPLACE PACKAGE TEST_PKG AS

UPDATE_HOURS_FLAG VARCHAR2(1) DEFAULT 'Y';

FUNCTION GET_UPDATE_HOURS_FLAG RETURN VARCHAR2;

PROCEDURE UPDATE_HOURS(
   OLD_KEY NUMBER,
   NEW_KEY NUMBER,
   OLD_SUB_KEY NUMBER,
   NEW_SUB_KEY NUMBER,
   OLD_SUB_SUB_KEY NUMBER,
   NEW_SUB_SUB_KEY NUMBER,
   OLD_DAY DATE,
   NEW_DAY DATE,
   OLD_AMOUNT NUMBER,
   NEW_AMOUNT NUMBER);

 PROCEDURE UPDATE_HOUR_TAB (
   KEY  NUMBER,
   SUB_KEY NUMBER,
   MONTH VARCHAR2,
   AMOUNT NUMBER);

PROCEDURE FLUSH_HOUR_TAB;

TYPE HOURLY_REC IS RECORD (
      KEY       NUMBER(10),
      SUB_KEY   NUMBER,
      MONTH     VARCHAR2(6),
      AMOUNT NUMBER);   

TYPE HOURLY_CHANGE_TAB IS TABLE OF HOURLY_REC INDEX BY BINARY_INTEGER;      

END TEST_PKG;
/

CREATE OR REPLACE PACKAGE BODY TEST_PKG AS

   PKG_HOUR_COUNT  NUMBER := 0;
   HOUR_TAB            HOURLY_CHANGE_TAB ;

FUNCTION GET_UPDATE_HOURS_FLAG RETURN VARCHAR2 IS

BEGIN
   RETURN UPDATE_HOURS_FLAG;
END GET_UPDATE_HOURS_FLAG;

PROCEDURE UPDATE_HOUR_TAB (
   KEY  NUMBER,
   SUB_KEY NUMBER,
   MONTH VARCHAR2,
   AMOUNT NUMBER) IS

   CNT NUMBER;
   STARTING_AMOUNT number := 0;
   CALC_AMOUNT number := 0;

BEGIN

   CNT := HOUR_TAB.FIRST;

   WHILE CNT IS NOT NULL LOOP
      EXIT WHEN HOUR_TAB(CNT).KEY = KEY AND HOUR_TAB(CNT).SUB_KEY = SUB_KEY AND HOUR_TAB(CNT).MONTH = MONTH;
      CNT := HOUR_TAB.NEXT(CNT);
   END LOOP;

   IF CNT IS NULL THEN
      PKG_HOUR_COUNT := PKG_HOUR_COUNT + 1;
      HOUR_TAB(PKG_HOUR_COUNT).KEY := KEY;
      HOUR_TAB(PKG_HOUR_COUNT).SUB_KEY := SUB_KEY;
      HOUR_TAB(PKG_HOUR_COUNT).MONTH := MONTH;
      HOUR_TAB(PKG_HOUR_COUNT).AMOUNT := AMOUNT;
   ELSE
      STARTING_AMOUNT := HOUR_TAB(CNT).AMOUNT;
      CALC_AMOUNT := HOUR_TAB(CNT).AMOUNT + AMOUNT;
      HOUR_TAB(CNT).AMOUNT := HOUR_TAB(CNT).AMOUNT + AMOUNT;      
   END IF;

   IF CNT IS NULL THEN
    CNT := PKG_HOUR_COUNT;
    END IF;
   INSERT INTO HOURS_DATA
   VALUES(KEY,
   SUB_KEY,
   MONTH,
   AMOUNT,
   CNT,
   STARTING_AMOUNT,
   CALC_AMOUNT,
   HOUR_TAB(CNT).AMOUNT,
   (SELECT COUNT(*) FROM HOURS_DATA),
   SYSDATE);

END UPDATE_HOUR_TAB;

PROCEDURE UPDATE_HOURS(
   OLD_KEY NUMBER,
   NEW_KEY NUMBER,
   OLD_SUB_KEY NUMBER,
   NEW_SUB_KEY NUMBER,
   OLD_SUB_SUB_KEY NUMBER,
   NEW_SUB_SUB_KEY NUMBER,
   OLD_DAY DATE,
   NEW_DAY DATE,
   OLD_AMOUNT NUMBER,
   NEW_AMOUNT NUMBER) IS

BEGIN

      UPDATE_HOUR_TAB (
         OLD_KEY,
         OLD_SUB_KEY,
         TO_CHAR(OLD_DAY,'YYYYMM'),
         -OLD_AMOUNT);      

      UPDATE_HOUR_TAB (
         NEW_KEY,
         NEW_SUB_KEY,
         TO_CHAR(NEW_DAY,'YYYYMM'),
         NEW_AMOUNT); 

END UPDATE_HOURS;

PROCEDURE FLUSH_HOUR_TAB IS
   CNT NUMBER;

BEGIN

   CNT := HOUR_TAB.FIRST;

   WHILE CNT IS NOT NULL LOOP   
      CNT := HOUR_TAB.NEXT(CNT);
      --UPDATE MONTHS, DATA ALREADY BROKEN AT THIS POINT
   END LOOP;

   HOUR_TAB.DELETE;
   PKG_HOUR_COUNT := 0;

END FLUSH_HOUR_TAB;

END TEST_PKG;
/

Insert Script for test data:

DECLARE
    ORDER_KEY NUMBER := 10;
    SUB_KEY NUMBER := 0;
    SUB_SUB_KEY NUMBER := 1;
    V_START_TIME TIMESTAMP WITH TIME ZONE := '01-FEB-19 09.00.00 AM UTC';
    V_END_TIME TIMESTAMP WITH TIME ZONE := '31-DEC-22 08.00.00 AM UTC';
    V_CURRENT_TIME TIMESTAMP WITH TIME ZONE;
    V_DAY DATE;
    V_HOUR NUMBER(2);
BEGIN    
    V_CURRENT_TIME := V_START_TIME;
    WHILE V_CURRENT_TIME <= V_END_TIME LOOP
    --DBMS_OUTPUT.PUT_LINE(V_CURRENT_TIME);

    V_DAY := CAST(V_CURRENT_TIME
                AT TIME ZONE ('US/PACIFIC') AS DATE);                
    V_HOUR := TO_CHAR(V_DAY, 'HH24') + 1;

    --EXCLUDE WEEKENDS                
    IF MOD(TO_CHAR(V_DAY, 'J'), 7) + 1 NOT IN(6, 7) THEN
        --DBMS_OUTPUT.PUT_LINE(V_DAY);
        INSERT INTO ORDER_HOURS
        VALUES(ORDER_KEY, SUB_KEY, SUB_SUB_KEY, V_CURRENT_TIME, TRUNC(V_DAY), V_HOUR, 10, 'N');
    END IF;

    V_CURRENT_TIME := V_CURRENT_TIME + INTERVAL '1' HOUR;

END LOOP;    
    COMMIT;
END;
/

Update I ran – only causes error on original table:

DECLARE
    P_AMOUNT NUMBER := 20;
    P_KEY NUMBER := 10;
BEGIN

FOR UPDATE_HOUR IN(SELECT * FROM ORDER_HOURS
                    WHERE KEY = P_KEY                       
                    AND DAY >= '1-MAR-20'
                    ) LOOP                           
                                    UPDATE ORDER_HOURS
                                    SET AMOUNT = P_AMOUNT
                                    WHERE KEY = P_KEY
                                        AND DAY = UPDATE_HOUR.DAY
                                        AND HOUR = UPDATE_HOUR.HOUR
                                        AND (AMOUNT <> P_AMOUNT);
--Running the trigger code directly has never worked
--            IF :NEW.CONFIRMED_FLAG = 'Y' AND :OLD.CONFIRMED_FLAG = 'Y' 
--                THEN RAISE_APPLICATION_ERROR(-20914, 'You cannot Update.');
--            ELSE
--                IF :NEW.CONFIRMED_FLAG = 'A' THEN
--                    :NEW.CONFIRMED_FLAG := 'Y';
--                END IF;
--                IF TEST_PKG.GET_UPDATE_HOURS_FLAG = 'Y' THEN
--                    TEST_PKG.UPDATE_HOURS(:OLD.KEY, :NEW.KEY, :OLD.SUB_KEY, :NEW.SUB_KEY, :OLD.SUB_SUB_KEY, :NEW.SUB_SUB_KEY, :OLD.DAY, :NEW.DAY, :OLD.AMOUNT, :NEW.AMOUNT);
--                END IF;
--            END IF;                                                        
    END LOOP;
    --DOES THE MONTHLY UPDATE BASED ON PACKAGE VARIABLE AND CLEARS THE VARIABLE
    TEST_PKG.FLUSH_HOUR_TAB;
END;
/

Code to check logs for errors:

SELECT HD.*
FROM (
SELECT HD.*, SUM(AMOUNT) OVER(PARTITION BY KEY, SUB_KEY, MONTH ORDER BY ROW_INDEX) CORRECT_AMOUNT, COUNT(*) OVER(PARTITION BY SUB_KEY) SEQ_COUNT
FROM HOURS_DATA HD
) HD
WHERE HD.ENDING_AMOUNT <> CORRECT_AMOUNT
ORDER BY ROW_INDEX;

I have checked whether there are more rows than there should be with the same index or sub_key, but the number of sub_key rows and index rows is always the same. Right now my only hypothesis is a bug in Oracle. I am happy to provide more data or code if needed, but since I have not been able to duplicate this on another table, I am not sure how to help someone else reproduce it.

The Order Hours table has 294,574,145 rows, filling 138 partitions and using 7045 MB for data. When I run the attached code against the mini table ORDER_HOUR, it does not error out. When I change only the table name to the actual table with all the data, I do get the error. This makes me suspect a bug in Oracle around triggers on large partitioned tables, since the only difference is the table the trigger runs against. It also makes it harder for someone to test and reproduce if the issue is table-specific. Any suggestions around the cause, or additional traces to run, are appreciated.

While I may be able to work around this by changing how we update the monthly table, or by updating the month table after every hour update despite the performance cost (there are triggers on the month table that do a monthly cost calculation, so each update is expensive), this issue is strange enough that I would like to solve it if possible.

Thank you for your help and patience reading through this wall of text.


Get this bounty!!!

#StackBounty: #oracle #oracle-12c Oracle 12c query performance varies based on schema/user

Bounty: 50

I have a strange (to me) performance issue. Based on the schema that runs the query I received very different performance. Here’s my setup.

Oracle 12c DB 
    SAMPLE schema/user
       WIDGET table (300,000 rows/10 columns of varchars, nvarchars, dates, and numbers)
    USER1 schema/user
    USER2 schema/user
    DBA1 schema/user
--SELECT granted on WIDGET table to USER1 and USER2. 
--DBA1 already has access to all tables across all schemas and has many more privs. 

When I execute either of the following:

select * from SAMPLE.WIDGET;
select count(1) from SAMPLE.WIDGET;

It takes about 0.5 seconds in the SAMPLE or DBA1 schema, and about 10 seconds in the USER1 and USER2 schemas.

So I’m looking for the following:

Under what condition(s) can query performance in Oracle differ based on the user you are logged in to when you execute a query?



#StackBounty: #sql #oracle #oracle11g #common-table-expression #ms-query Oracle CTE failing in one computer

Bounty: 100

I have queries created in Microsoft Query to run in Excel with VBA.

They work on other computers, but there’s one computer where they don’t.

On that computer the queries still work, except the ones that use CTEs.

A normal query like the following works:

SELECT
  TBL.COL
FROM
  DB.TBL TBL;

But when it has a subquery (CTE) like the following:

WITH
  SUBQUERY AS (
    SELECT
      TBL.COL
    FROM
      DB.TBL TBL
  )
SELECT
  SUBQUERY.COL
FROM
  SUBQUERY;

It runs but doesn’t retrieve any data.

It doesn’t even show the column names, as it would if the query had worked but returned 0 records.

The query shows the warning message:

SQL Query can’t be represented graphically. Continue anyway?

Which is normal and shows in any computer, but it also shows another warning message after:

SQL statement executed successfully.

Which only appears in that computer when it doesn’t work.

I need to be able to use them for the queries that I have made.

Using temporary tables would maybe work but I don’t have the permissions required to try.

I tried using inline views but they duplicate the data.
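For comparison, the earlier CTE can be rewritten as an inline view, which is equivalent in Oracle (though, as noted, it did not solve the problem here):

```sql
SELECT
  SUBQUERY.COL
FROM (
  SELECT
    TBL.COL
  FROM
    DB.TBL TBL
) SUBQUERY;
```

MS Query generally has no trouble with this form, since the subquery lives in the FROM clause rather than a WITH clause.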



#StackBounty: #.net #oracle #entity-framework #ado.net Entity Framework 6 – StoreGeneratedPattern value not staying

Bounty: 50

We are using Entity Framework 6.2.0 against an Oracle 12c database, and have been following the Database-First approach.

A lot of our tables have an _ID column that is generated by the database upon INSERT. This is done by using a default value on the DB server, TABLE_SEQUENCE.NEXTVAL.

After generating the DB model in Entity Framework, I went through and marked each _ID column as StoreGeneratedPattern="Identity", and we were all set. However, when updating the DB model after some DB changes, we noticed an issue: one table in particular forgets this setting and resets it back to None. All other tables keep StoreGeneratedPattern="Identity".

This consistently happens each time we update the DB model from the database.
Visual Studio will still say that PROBLEMATIC_TABLE_ID has Identity set, but the .edmx file does not.
My process has been to update the model, then open the .edmx file and manually paste in StoreGeneratedPattern="Identity" for that one column.

I have compared this table against others in the DB, in the .edmx, in the codebase, and I cannot find anything that would cause this one column on this one table to behave differently upon DB model update.

TL;DR: Upon updating our local DB model from the database, the value for StoreGeneratedPattern on one column/table is always reset, while all others stay the same. VS still says it is Identity, but the XML in the .edmx file disagrees and needs to be updated manually.



#StackBounty: #sql #oracle #recursion #plsql #hierarchical-data Oracle SQL/PLSQL: Hierarchical recursive query with repeating data

Bounty: 50

I have a recursive function below that works very well but I have now found that some of the data is not unique and I need a way to handle it.

  function calc_cost(
    model_no_ number, 
    revision_ number, 
    sequence_no_ in number, 
    currency_ in varchar2
  ) return number is
    qty_ number := 0;
    cost_ number := 0;
  begin

    select nvl(new_qty, qty), purch_cost 
      into qty_, cost_ 
    from prod_conf_cost_struct_clv
    where model_no = model_no_
      and revision = revision_
      and sequence_no = sequence_no_
      and (purch_curr = currency_ or purch_curr is null);

    if cost_ is null then 
      select sum(calc_cost(model_no, revision, sequence_no, purch_curr)) into cost_ 
      from prod_conf_cost_struct_clv 
      where model_no = model_no_
        and revision = revision_
        and (purch_curr = currency_ or purch_curr is null)
        and part_no in (
          select component_part
          from prod_conf_cost_struct_clv
          where model_no = model_no_
            and revision = revision_
            and sequence_no = sequence_no_);
    end if;
    return qty_ * cost_;
  exception when no_data_found then 
    return 0;
  end calc_cost;

The function fails on the following criterion: ...part_no in (select component_part....

Sample data:

rownum., model_no, revision, sequence_no, part_no, component_part, level, cost

 1. 62, 1, 00, XXX, ABC, 1, null
 2. 62, 1, 10, ABC, 123, 2, null
 3. 62, 1, 20, 123, DEF, 3, null
 4. 62, 1, 30, DEF, 456, 4, 100
 5. 62, 1, 40, DEF, 789, 4, 50
 6. 62, 1, 50, DEF, 024, 4, 20
 7. 62, 1, 60, ABC, 356, 2, null
 8. 62, 1, 70, 356, DEF, 3, null
 9. 62, 1, 80, DEF, 456, 4, 100
 10. 62, 1, 90, DEF, 789, 4, 50
 11. 62, 1, 100, DEF, 024, 4, 20

If I pass the following values for model_no, revision, and sequence_no (ignoring currency, as it is not relevant to the issue):

62, 1, 20

I want it to sum rows 4-6 ONLY = 170; however, it is summing rows 4-6 AND 9-11 = 340.

Ultimately this function will be used in the SQL query below:

select level, sys_connect_by_path(sequence_no, '->') path, 
     calc_cost(model_no, revision, sequence_no, 'GBP') total_gbp
from prod_conf_cost_struct_clv
where model_no = 62
  and revision = 1
connect by prior component_part = part_no
  and prior model_no = 62
  and prior revision = 1
start with sequence_no = 20
order by sequence_no

As you can see this would also introduce the issue of component_part = part_no.

Any assistance would be most appreciated.

Thanks in advance.



#StackBounty: #oracle #oracle-12c #installation Oracle SQLPLUS: ORA-12547: TNS:lost contact

Bounty: 100

This is a total rewrite of the previous post, where people suggested I start from scratch and document my actions.

  1. I followed How to install Oracle 18c (Enterprise Edition) on Ubuntu 18.04? (a 4-step guide, plus 1 troubleshooting step).

  2. My machine is Ubuntu 18.04 (amd64), and the 18.3 Linux x86-64 distribution was downloaded from here. The installer is run via ./runInstaller. On top of that I installed the example database LINUX.X64_180000_examples.zip.

  3. Environmental variables:

    #ORACLE 
    export ORACLE_BASE=/oracle18c/app/oracle
    export ORACLE_HOME=$ORACLE_BASE/product/18.0.0/dbhome_1
    export PATH=/usr/sbin:/usr/local/bin:$PATH
    export PATH=$ORACLE_HOME/bin:$PATH
    export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/local/lib
    export ORACLE_LIBPATH=$ORACLE_HOME/lib
    export CLASSPATH=$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$CLASSPATH
    export ORACLE_HOSTNAME=$HOSTNAME
    export ORA_INVENTORY=/oracle18c/app/oraInventory
    export DATA_DIR=$ORACLE_BASE/oradata
    export TNS_ADMIN=$ORACLE_HOME/network/admin
    export ADR_HOME=$ORACLE_BASE/diag
    #--------------------------------------------------------------------
    export ORACLE_SID=orcl
    export ORACLE_UNQNAME=orcl
    export PDB_NAME=pdb
    export NLS_LANG=AMERICAN_AMERICA.AL32UTF8
    #--------------------------------------------------------------------
    export TMP=/tmp; export TMPDIR=$TMP; export TEMP=$TMP
    
  4. Listener status:
    oracle@sergey-Bionic:~$ lsnrctl status
    
    LSNRCTL for Linux: Version 18.0.0.0.0 - Production on 25-APR-2019 17:35:43
    
    Copyright (c) 1991, 2018, Oracle.  All rights reserved.
    
    Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=sergey-Bionic)(PORT=1521)))
    STATUS of the LISTENER
    ------------------------
    Alias                     LISTENER
    Version                   TNSLSNR for Linux: Version 18.0.0.0.0 - Production
    Start Date                25-APR-2019 16:50:53
    Uptime                    0 days 0 hr. 44 min. 49 sec
    Trace Level               off
    Security                  ON: Local OS Authentication
    SNMP                      OFF
    Listener Parameter File   /oracle18c/app/oracle/product/18.0.0/dbhome_1/network/admin/listener.ora
    Listener Log File         /oracle18c/app/oracle/diag/tnslsnr/sergey-Bionic/listener/alert/log.xml
    Listening Endpoints Summary...
      (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=sergey-Bionic)(PORT=1521)))
      (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
    The listener supports no services
    The command completed successfully
    oracle@sergey-Bionic:~$ ss -elpunt | grep -E "^Net|tnslsnr"
    Netid  State    Recv-Q   Send-Q      Local Address:Port      Peer Address:Port                                                                                  
    tcp    LISTEN   0        128                     *:1521                 *:*      users:(("tnslsnr",pid=3394,fd=8)) uid:54321 ino:39696 sk:12 v6only:0 <->       
    oracle@sergey-Bionic:~$ 
    
  5. The error is still the same:
    oracle@sergey-Bionic:/$ strace -f -o /tmp/strace.log $ORACLE_HOME/bin/sqlplus /nolog
    
    SQL*Plus: Release 18.0.0.0.0 - Production on Thu Apr 25 18:02:09 2019
    Version 18.3.0.0.0
    
    Copyright (c) 1982, 2018, Oracle.  All rights reserved.
    
    SQL> connect sys/oracle18c as sysdba;
    ERROR:
    ORA-12547: TNS:lost contact
    
    
    SQL> 
    

The solution suggested in Troubleshoot ORA-12547: TNS:lost contact error does not help. Interestingly, the value of ulimit -c is not persistent between reboots.

Any help, namely accessing a tutorial database, Oracle Database 18c Examples (18.3) for Linux x86-64, (sha256sum – 405ca9e9341a11f3468b792308c03781fc1988b4278395c1954984f2224a4d32), is highly appreciated!



#StackBounty: #sql #oracle #split Split text into multiple lines based on pipe and cap delimiter – Oracle PL/SQL Pipelined Function

Bounty: 50

I have a table:

 CREATE TABLE "text_file"
( "SEQ" NUMBER,
"SPLIT_VALUE" CLOB
)

The content of the table is:

SEQ       SPLIT_VALUE
1         MSH|^~&|GHH LAB|ELAB-3|GHH OE|BLDG4|200202150930||ORU^R01
          PID|||555-44-4444||EVERYWOMAN^EVE^E^^^^L|JONES|19620320|F|||153 FERNWOOD DR.^^STATESVILLE^OH^35292|
          OBR|1|845439^GHH OE|1045813^GHH LAB|15545^GLUCOSE|||200202150730
          OBX|1|SN|1554-5^GLUCOSE^POST 12H CFST:MCNC:PT:SER/PLAS:QN||^182|mg/dl|70_105
          OBX|2|SN|1554-5^GLUCOSE^POST 12H CFST:MCNC:PT:SER/PLAS:QN||^172|mg/dl|70_105

2         MSH|^~&|GHH LAB|ELAB-3|GHH OE|BLDG4|200202150930||ORU^R01
          PID|||555-44-4444||EVERYWOMAN^EVE^E^^^^L|JONES|19620320|F|||153 FERNWOOD DR.^^STATESVILLE^OH^35292|
          OBR|1|845439^GHH OE|1045813^GHH LAB|15545^GLUCOSE|||200202150730
          OBX|1|SN|1554-5^GLUCOSE^POST 12H CFST:MCNC:PT:SER/PLAS:QN||^182|mg/dl|70_105
          OBX|2|SN|1554-5^GLUCOSE^POST 12H CFST:MCNC:PT:SER/PLAS:QN||^172|mg/dl|70_105

Please note – the possible segments, like MSH, OBR, OBX, LX, can be 3 characters or 2 characters, so the best way would be to get the segment name from before the first pipe.

I am looking to split the string in split_value into multiple rows under the following conditions:

  • SEQ — it would pick from the first column
  • SPLIT_SEQ — it would split based on the first word before |, for ex. MSH, OBR, OBX, LX, followed by a sequence starting from 00. If there is a caret ^, then it would break down even further, for ex. MSH09-01, MSH09-02

Please note – there is an exception for segment MSH: for MSH, the first element is | and the second one is ^~&.

SEQ SPLIT_SEQ   SEG_SEQ SPLIT_SEQ_VALUE
1   MSH00       1       MSH
1   MSH01       1       |
1   MSH02       1       ^~&
1   MSH03       1       GHH LAB
1   MSH04       1       ELAB-3
  • SEG_SEQ — if the segment (the first word before |) is repeated within the same SEQ, then increase it. So if OBX appears twice, the first OBX rows would have 1, the second OBX rows would have 2, and so on
  • SPLIT_SEQ_VALUE — The value from the message above would be specified here.

Please note – I have around 90,000 rows in the text_file table, so the solution should be able to process 90,000 rows efficiently.

The complete output is:

SEQ SPLIT_SEQ   SEG_SEQ SPLIT_SEQ_VALUE
1   MSH00       1       MSH
1   MSH01       1       |
1   MSH02       1       ^~&
1   MSH03       1       GHH LAB
1   MSH04       1       ELAB-3
1   MSH05       1       GHH OE
1   MSH06       1       BLDG4
1   MSH07       1       200202150930
1   MSH08       1       
1   MSH09-01    1       ORU
1   MSH09-02    1       R01
1   PID00       1       PID
1   PID01       1       
1   PID02       1       
1   PID03       1       555-44-4444
1   PID04       1       
1   PID05-01    1       EVERYWOMAN
1   PID05-02    1       EVE
1   PID05-03    1       E
1   PID05-04    1   
1   PID05-05    1   
1   PID05-06    1   
1   PID05-07    1       L
1   PID06       1       JONES
1   PID07       1       19620320
1   PID08       1       F
1   PID09       1       
1   PID10       1       
1   PID11-01    1       153 FERNWOOD DR.
1   PID11-02    1   
1   PID11-03    1       STATESVILLE
1   PID11-04    1       OH
1   PID11-05    1       35292
1   PID12       1   
1   OBR00       1       OBR
1   OBR01       1       1
1   OBR02-01    1       845439
1   OBR02-02    1       GHH OE
1   OBR03-01    1       1045813
1   OBR03-02    1       GHH LAB
1   OBR04-01    1       15545
1   OBR04-02    1       GLUCOSE
1   OBR05       1   
1   OBR06       1   
1   OBR07       1       200202150730
1   OBX00       1       OBX
1   OBX01       1       1
1   OBX02       1       SN
1   OBX03-01    1       1554-5
1   OBX03-02    1       GLUCOSE
1   OBX03-03    1       POST 12H CFST:MCNC:PT:SER/PLAS:QN
1   OBX04       1       
1   OBX05-01    1       
1   OBX05-02    1       182
1   OBX06       1       mg/dl
1   OBX07       1       70_105
1   OBX00       2       OBX
1   OBX01       2       1
1   OBX02       2       SN
1   OBX03-01    2       1554-5
1   OBX03-02    2       GLUCOSE
1   OBX03-03    2       POST 12H CFST:MCNC:PT:SER/PLAS:QN
1   OBX04       2           
1   OBX05-01    2       
1   OBX05-02    2       182
1   OBX06       2       mg/dl
1   OBX07       2       70_105

2   MSH00       1       MSH
2   MSH01       1       |
2   MSH02       1       ^~&
2   MSH03       1       GHH LAB
2   MSH04       1       ELAB-3
2   MSH05       1       GHH OE
2   MSH06       1       BLDG4
2   MSH07       1       200202150930
2   MSH08       1       
2   MSH09-01    1       ORU
2   MSH09-02    1       R01
2   PID00       1       PID
2   PID01       1       
2   PID02       1       
2   PID03       1       555-44-4444
2   PID04       1       
2   PID05-01    1       EVERYWOMAN
2   PID05-02    1       EVE
2   PID05-03    1       E
2   PID05-04    1   
2   PID05-05    1   
2   PID05-06    1   
2   PID05-07    1       L
2   PID06       1       JONES
2   PID07       1       19620320
2   PID08       1       F
2   PID09       1       
2   PID10       1       
2   PID11-01    1       153 FERNWOOD DR.
2   PID11-02    1   
2   PID11-03    1       STATESVILLE
2   PID11-04    1       OH
2   PID11-05    1       35292
2   PID12       1   
2   OBR00       1       OBR
2   OBR01       1       1
2   OBR02-01    1       845439
2   OBR02-02    1       GHH OE
2   OBR03-01    1       1045813
2   OBR03-02    1       GHH LAB
2   OBR04-01    1       15545
2   OBR04-02    1       GLUCOSE
2   OBR05       1   
2   OBR06       1   
2   OBR07       1       200202150730
2   OBX00       1       OBX
2   OBX01       1       1
2   OBX02       1       SN
2   OBX03-01    1       1554-5
2   OBX03-02    1       GLUCOSE
2   OBX03-03    1       POST 12H CFST:MCNC:PT:SER/PLAS:QN
2   OBX04       1       
2   OBX05-01    1       
2   OBX05-02    1       182
2   OBX06       1       mg/dl
2   OBX07       1       70_105
2   OBX00       2       OBX
2   OBX01       2       1
2   OBX02       2       SN
2   OBX03-01    2       1554-5
2   OBX03-02    2       GLUCOSE
2   OBX03-03    2       POST 12H CFST:MCNC:PT:SER/PLAS:QN
2   OBX04       2           
2   OBX05-01    2       
2   OBX05-02    2       182
2   OBX06       2       mg/dl
2   OBX07       2       70_105

I believe a PL/SQL pipelined function would be the best way to do this.

Any help would be appreciated.
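As a starting point, a pipelined function for this follows the general shape below. This is only a sketch: the type and function names are illustrative, and the actual parsing of each segment (splitting on | and ^, numbering the pieces, and handling the MSH exception) still has to be filled in.

```sql
-- Illustrative names only; not part of the question.
CREATE OR REPLACE TYPE split_row AS OBJECT (
  seq             NUMBER,
  split_seq       VARCHAR2(20),
  seg_seq         NUMBER,
  split_seq_value VARCHAR2(4000)
);
/
CREATE OR REPLACE TYPE split_row_tab AS TABLE OF split_row;
/
CREATE OR REPLACE FUNCTION split_segments RETURN split_row_tab PIPELINED IS
BEGIN
  FOR r IN (SELECT seq, split_value FROM "text_file") LOOP
    -- Parse r.split_value line by line, then field by field on '|',
    -- then component by component on '^', and PIPE ROW each piece:
    PIPE ROW (split_row(r.seq, 'MSH00', 1, 'MSH'));  -- placeholder for the real parse loop
  END LOOP;
  RETURN;
END;
/
-- Consumed with: SELECT * FROM TABLE(split_segments);
```

Because the rows are piped as they are produced, this pattern avoids materializing all of the output for 90,000 messages at once.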



#StackBounty: #sql-server #oracle #reporting-services #ssrs-2008-r2 #ssrs-2017 Oracle Date format exception in SQL Server Reporting Ser…

Bounty: 100

Earlier my client was using SSRS 2008R2 with Oracle as transaction database. Recently they have upgraded to SSRS 2017 and now many reports are throwing following error:

ERROR: Throwing
Microsoft.ReportingServices.ReportProcessing.ProcessingAbortedException:
[AbnormalTermination:ReportProcessing],
Microsoft.ReportingServices.ReportProcessing.ProcessingAbortedException:
An error has occurred during report processing. —>
Microsoft.ReportingServices.ReportProcessing.ReportProcessingException:
Query execution failed for dataset ‘Ds_Main’. —>
Oracle.ManagedDataAccess.Client.OracleException: ORA-01830: date
format picture ends before converting entire input string

After closely looking into report query, I have noticed that this error is for all those reports where oracle function TO_DATE(<Date Value>) has been used without date format. For example:

To_date(:Date_Parameter) -> this syntax throws above mentioned error
To_Date(:Date_Parameter,’MM/DD/YYYY’) -> this syntax works perfectly
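The likely mechanism is that TO_DATE without an explicit format mask falls back to the session's NLS_DATE_FORMAT, which can differ between client stacks (the format values below are illustrative):

```sql
-- TO_DATE with no mask uses the session NLS_DATE_FORMAT:
ALTER SESSION SET NLS_DATE_FORMAT = 'DD-MON-RR';

SELECT TO_DATE('03/15/2019') FROM dual;
-- may fail with a date conversion error (e.g. ORA-01830) under this setting

SELECT TO_DATE('03/15/2019', 'MM/DD/YYYY') FROM dual;
-- works regardless of session settings, since the mask is explicit
```

This is why the reports broke only after the client stack changed: the explicit-mask form never depended on the session default.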

Any suggestion to fix this issue without updating bunch of reports, please



Convert Comma separated String to Rows in Oracle SQL

Many times we need to take a comma-separated list of terms in a single string and convert it into rows in a SQL query.

For example:

 India, USA, Russia, Malaysia, Mexico

Needs to be converted to:

 Country
 India
 USA
 Russia
 Malaysia
 Mexico

The following SQL script can help. Just replace the example string and the delimiter with your own values.
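One common approach uses REGEXP_SUBSTR with CONNECT BY LEVEL (a sketch; swap in your own string, and change the comma in both regular expressions if your delimiter differs):

```sql
SELECT TRIM(REGEXP_SUBSTR(str, '[^,]+', 1, LEVEL)) AS country
FROM (SELECT 'India, USA, Russia, Malaysia, Mexico' AS str FROM dual)
CONNECT BY LEVEL <= REGEXP_COUNT(str, ',') + 1;
-- returns one row per country
```

The CONNECT BY LEVEL clause generates one row per delimiter-separated term, and REGEXP_SUBSTR picks out the LEVEL-th term; TRIM removes the spaces that follow each comma in the example string.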

Apache Commons DbUtils Mini Wrapper

This is a very small database connector in Java: a wrapper class around Apache Commons DbUtils.

The Commons DbUtils library is a small set of classes designed to make working with JDBC easier. JDBC resource cleanup code is mundane, error prone work so these classes abstract out all of the cleanup tasks from your code leaving you with what you really wanted to do with JDBC in the first place: query and update data.

Some of the advantages of using DbUtils are:

  • No possibility for resource leaks. Correct JDBC coding isn’t difficult but it is time-consuming and tedious. This often leads to connection leaks that may be difficult to track down.
  • Cleaner, clearer persistence code. The amount of code needed to persist data in a database is drastically reduced. The remaining code clearly expresses your intention without being cluttered with resource cleanup.
  • Automatically populate Java Bean properties from Result Sets. You don’t need to manually copy column values into bean instances by calling setter methods. Each row of the Result Set can be represented by one fully populated bean instance.

DbUtils is designed to be:

  • Small – you should be able to understand the whole package in a short amount of time.
  • Transparent – DbUtils doesn’t do any magic behind the scenes. You give it a query, it executes it and cleans up for you.
  • Fast – You don’t need to create a million temporary objects to work with DbUtils.

DbUtils is not:

  • An Object/Relational bridge – there are plenty of good O/R tools already. DbUtils is for developers looking to use JDBC without all the mundane pieces.
  • A Data Access Object (DAO) framework – DbUtils can be used to build a DAO framework though.
  • An object oriented abstraction of general database objects like a Table, Column, or Primary Key.
  • A heavyweight framework of any kind – the goal here is to be a straightforward and easy to use JDBC helper library.

Wrapper:
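A minimal sketch of such a wrapper is shown below. The class and method names are illustrative, and it assumes commons-dbutils and a JDBC driver are on the classpath:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.List;
import java.util.Map;
import org.apache.commons.dbutils.QueryRunner;
import org.apache.commons.dbutils.handlers.MapListHandler;

/** Minimal illustrative wrapper around Apache Commons DbUtils. */
public class DbHelper {
    private final String url;
    private final String user;
    private final String password;

    public DbHelper(String url, String user, String password) {
        this.url = url;
        this.user = user;
        this.password = password;
    }

    /** Runs a SELECT and returns each row as a column-name -> value map. */
    public List<Map<String, Object>> query(String sql, Object... params) throws SQLException {
        QueryRunner runner = new QueryRunner();
        // try-with-resources closes the connection; DbUtils cleans up the rest.
        try (Connection conn = DriverManager.getConnection(url, user, password)) {
            return runner.query(conn, sql, new MapListHandler(), params);
        }
    }

    /** Runs an INSERT/UPDATE/DELETE and returns the affected row count. */
    public int update(String sql, Object... params) throws SQLException {
        QueryRunner runner = new QueryRunner();
        try (Connection conn = DriverManager.getConnection(url, user, password)) {
            return runner.update(conn, sql, params);
        }
    }
}
```

In real use you would hand QueryRunner a DataSource (e.g. a connection pool) instead of opening a connection per call, but the shape of the wrapper stays the same.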