#StackBounty: #java #sql #json #oracle #spring-boot I am new to springboot and need to insert json data in oracle table and avoid dupli…

Bounty: 50

I am able to insert the JSON data into an Oracle table from Spring Boot using an intermediate table.

I have a table abc:

ID  first_name  last_name  cust_ID  Active_Ind last_upd_dt
1   abc         pqr          101     Y          01-Apr-2021
2   aaa         bbb          102     Y          05-Feb-2021

I need to make sure of the following:
If the new JSON data contains a record that already exists in table abc, do not update it; keep it as is, and insert only the records that are new. And if a record in the Oracle table is not present in the new JSON data, change its ACTIVE_IND to 'N'.

I tried the query below to insert values from the intermediate table 'test' where they do not already exist:

insert into abc
(ID, 
first_name, 
last_name, 
cust_ID, 
active_ind, 
last_upd_dt)
select 
abc_seq.nextval,
first_name, 
last_name, 
cust_ID, 
active_ind, 
last_upd_dt 
from test t
where not exists(
select null from abc a
where a.first_name = t.first_name
and a.cust_ID = t.cust_ID);

This works fine in Oracle SQL Developer, but when I run the query below from Spring Boot it somehow inserts duplicates, and I am not sure why this is happening. I have used a prepared statement with indexed (positional) parameters.

insert into abc
(ID, 
first_name, 
last_name, 
cust_ID, 
active_ind, 
last_upd_dt)
select 
abc_seq.nextval,
?, 
?, 
?, 
?, 
?
from test t
where not exists(
select null from abc a
where a.first_name = t.first_name
and a.cust_ID = t.cust_ID);

I have tried merge queries as well, but none of them worked for me.
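
For reference, a MERGE-based version of the same logic might look like the sketch below (table, sequence, and column names are taken from the question; the second statement, which flags rows missing from the new data, is shown only as an illustration and is untested):

merge into abc a
using test t
on (a.first_name = t.first_name and a.cust_ID = t.cust_ID)
when not matched then
  insert (ID, first_name, last_name, cust_ID, active_ind, last_upd_dt)
  values (abc_seq.nextval, t.first_name, t.last_name, t.cust_ID, t.active_ind, t.last_upd_dt);

-- deactivate rows that are no longer present in the new data
update abc a
   set a.active_ind = 'N'
 where not exists (select null from test t
                   where t.first_name = a.first_name
                     and t.cust_ID   = a.cust_ID);

A single set-based MERGE per batch also sidesteps duplicates that can appear when rows are inserted one at a time with separate NOT EXISTS checks.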


Get this bounty!!!

#StackBounty: #java #sql #oracle #spring-data-jpa #jdbctemplate Slow Performance with NamedParameterJdbcTemplate

Bounty: 50

I have a Spring 5 application that uses NamedParameterJdbcTemplate to fetch data from an Oracle DB.

When I execute the query from the IntelliJ database console I get the result in 1.5 seconds, but when I execute the same query from the Java application using jdbcTemplate.queryForList(query, params) it takes 90 seconds.

Here is the DB configuration class:

@Configuration
@EnableJpaRepositories("com.xxxx.xxx.relational.repositories")
@PropertySource(value = "classpath:jdbc.properties")
@ComponentScan
@EnableTransactionManagement
@Import(RestTemplateConfig.class)
public class RelationalConfig {

    @Bean(destroyMethod = "close")
    public BasicDataSource dataSource(@Value("${jdbc.driverClassName}") String driverClassName,
            @Value("${jdbc.url}") String url, @Value("${jdbc.username}") String username,
            @Value("${jdbc.password}") String password) {
        BasicDataSource ds = new BasicDataSource();
        ds.setDriverClassName(driverClassName);
        ds.setUrl(url);
        ds.setUsername(username);
        ds.setPassword(password);
        return ds;
    }

    @Bean
    HibernateJpaVendorAdapter hibernateJpaVendorAdapter() {
        return new HibernateJpaVendorAdapter();
    }

    @Bean
    @Autowired
    LocalContainerEntityManagerFactoryBean entityManagerFactory(BasicDataSource dataSource,
            HibernateJpaVendorAdapter vendor, @Value("${hibernate.dialect}") String dialect,
            @Value("${jdbc.schema}") String schema) {
        LocalContainerEntityManagerFactoryBean entityManagerFactoryBean = new LocalContainerEntityManagerFactoryBean();
        entityManagerFactoryBean.setDataSource(dataSource);
        entityManagerFactoryBean.setJpaVendorAdapter(vendor);
        Properties jpaProperties = new Properties();

        jpaProperties.put("hibernate.dialect", dialect);
        jpaProperties.put("hibernate.default_schema", schema);
//        jpaProperties.put("hibernate.jdbc.fetch_size", 200);
//        jpaProperties.put("hibernate.connection.pool_size", 10);
        entityManagerFactoryBean.setJpaProperties(jpaProperties);
        entityManagerFactoryBean.setPackagesToScan("com.xxx.xxx.relational.entities");
        return entityManagerFactoryBean;
    }


    @Bean
    @Autowired
    JpaTransactionManager transactionManager(EntityManagerFactory factory) {
        return new JpaTransactionManager(factory);
    }


    @Bean
    @Autowired
    NamedParameterJdbcTemplate jdbcTemplate(DataSource dataSource) {
        JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource);
        jdbcTemplate.setFetchSize(200);
        return new NamedParameterJdbcTemplate(jdbcTemplate);
    }


    @Bean
    @Autowired
    SiblingsRepo<LegDTO> legSiblingsRepo(LegRepository legRepo) {
        return new SiblingsRepo<LegDTO>() {
            @Override
            public Optional<LegDTO> byNext(LegDTO next) {
                return legRepo.findByNextLegId(next.getId());
            }

            @Override
            public Optional<LegDTO> byId(Integer id) {
                return legRepo.findById(id);
            }

            @Override
            public LegDTO save(LegDTO sibling) {
                return legRepo.save(sibling);
            }
        };
    }


    @Bean
    @Autowired
    SiblingsRepo<JourneyDTO> journeySiblingsRepo(JourneyRepository journeyRepo) {
        return new SiblingsRepo<JourneyDTO>() {
            @Override
            public Optional<JourneyDTO> byNext(JourneyDTO next) {
                return journeyRepo.findByNextJourneyId(next.getId());
            }

            @Override
            public Optional<JourneyDTO> byId(Integer id) {
                return journeyRepo.findById(id);
            }

            @Override
            public JourneyDTO save(JourneyDTO sibling) {
                return journeyRepo.save(sibling);
            }
        };
    }

    @Bean
    public static PropertySourcesPlaceholderConfigurer propertyConfig() {
        return new PropertySourcesPlaceholderConfigurer();
    }
}

Here is the query:

String q = "select journeyid, extid, extccuid
        from ACTIVE_JOURNEY_VIEW
        WHERE coalesce(ata, nextJourneyEtd, sysdate + 1) > sysdate
          AND ((organizationid = :orgId AND deleted = 0)
            OR organizationid = :orgId
            OR enclosingHire_organizationId = :orgId
            OR gln_orig_organizationid = :orgId
            OR originFacilityId in (
                select facilityid
                from v_FacilityAndBbrOrg
                where bbrorgid = :orgId
                  and (nvl(atd, etd), activeEndDate) overlaps (fromDate, toDate))
            OR gln_dest_organizationid = :orgId
            OR destinationFacilityId in (
                select facilityid
                from v_FacilityAndBbrOrg
                where bbrorgid = :orgId
                  and (nvl(atd, etd), activeEndDate) overlaps (fromDate, toDate))
            )";

protected <T> List<T> query(String key, RowMapper<T> mapper, Object... kvs) {
    Map<String, Object> params = map(kvs);
    return  jdbcTemplate.queryForList(q, params);
}
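
One way to narrow down whether the 90 seconds is spent in the database or in the driver/application is to compare the cursors Oracle created for the console run and the JDBC run. The sketch below is not from the original post, and the SQL_TEXT filter is only an example; it queries V$SQL and then pulls the plan and the captured bind datatypes for a given SQL_ID:

select sql_id, child_number, plan_hash_value, executions,
       round(elapsed_time / 1e6, 1) as elapsed_sec, sql_text
  from v$sql
 where sql_text like 'select journeyid, extid, extccuid%';

-- for a specific sql_id/child, compare the plan and the bind datatypes,
-- which frequently differ between a console session and a JDBC session
select * from table(dbms_xplan.display_cursor('&sql_id', null, 'TYPICAL'));

select name, datatype_string, value_string
  from v$sql_bind_capture
 where sql_id = '&sql_id';

If the two executions show different PLAN_HASH_VALUEs, the slowdown is likely a plan issue (often caused by bind datatype differences such as VARCHAR vs NVARCHAR binds); if the plans match, the time is more likely being lost fetching rows, in which case the fetch size and the number of rows returned are the things to look at.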


Get this bounty!!!

#StackBounty: #ldap #oracle Why does LDAP work only for 64 bit but is slow for 32 bit? Why did my 1 line SQLNET.ORA change fix my issue?

Bounty: 50

I have two Oracle 12.1 clients installed on my laptop, a 32-bit install and a 64-bit install. The configurations were the same for both. My SQLNET.ORA file was set up to first use OID and then fail over to my TNSNAMES.ORA file. The contents of the files are shown below.

My problem was that although my 64-bit client worked fine, when I connected using the 32-bit client it took about 2 minutes to make the connection. SSIS (Microsoft SQL Server Integration Services) can only use the 32-bit client, and trying to develop an SSIS package with it was a horribly slow experience until I figured out how to resolve the issue, but I would like to understand why my change worked.

-- SQLNET.ORA FILE:

# This file is actually generated by netca. But if customers choose to 
# install "Software Only", this file wont exist and without the native 
# authentication, they will not be able to connect to the database on NT.
names.directory_path=(LDAP, TNSNAMES)
NAMES.DEFAULT_DOMAIN = cigna.com
# SQLNET.AUTHENTICATION_SERVICES = (NTS)

-- LDAP.ORA FILE:

DIRECTORY_SERVERS= (oid.gtm.internal.mycompany.com:3060)
DEFAULT_ADMIN_CONTEXT = "dc=mycompany,dc=com"
DIRECTORY_SERVER_TYPE = OID

-- TNSNAMES.ORA FILE:

Contents not shown

The 64-bit client worked fine, so I left its 3 files alone. For the 32-bit client that connected slowly, I changed the SQLNET.ORA line from "names.directory_path=(LDAP, TNSNAMES)" to "names.directory_path=(TNSNAMES, LDAP)" so that it goes straight to my TNSNAMES.ORA file to resolve the endpoint instead of trying to communicate with the LDAP server first.

So, it seems that there was a delay trying to communicate with the LDAP server, oid.gtm.internal.mycompany.com, for some reason, but only with the 32-bit client. When I inverted the order, it was fine for 32-bit.

My question is this: Why would it work for 64-bit but not for 32-bit, when the 32-bit client otherwise works?

Update
My Windows PATH variable puts the 64-bit Oracle folder first. When I open a cmd window and run tnsping, the 64-bit client is used and it says "Used LDAP adapter to resolve the alias" in 60 msec. If I create a batch file that modifies the PATH variable for the cmd session to put the 32-bit Oracle client folder first, and then CD into the 32-bit bin folder C:\oracle_12102_32bit\product\12102_32bit\CLIENT_1\bin before attempting the tnsping, it pauses a long time and then says that it "Used TNSNAMES adapter to resolve the alias", even though the SQLNET.ORA said to use LDAP first. In this 32-bit test, it says that it used TNSNAMES to resolve the alias regardless of whether my SQLNET.ORA says to use LDAP or TNSNAMES first. If LDAP is first, it must be failing and taking a long time to do so. If I switch the order to TNSNAMES first, then LDAP, it is fast.

So I guess the question is: why does LDAP only work for 64-bit if I have copies of the same 3 files in the 32-bit folders? Someone gave me a line to add (I think it was to the SQLNET.ORA file) to write logging to a trace (*.TRC) file to see why, but I could not find any recent trace files on my PC. (I used Void Tools' Search Everything to quickly search my whole PC for trace files.) I can't remember the exact statement, which I have since removed, but it set some logging level to ADMIN or something like that.
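
For reference, client-side tracing is usually turned on with sqlnet.ora parameters along these lines (a sketch; the directory and file name are placeholders):

TRACE_LEVEL_CLIENT = ADMIN
TRACE_DIRECTORY_CLIENT = C:\oracle\trace
TRACE_FILE_CLIENT = client_trace
DIAG_ADR_ENABLED = OFF

With DIAG_ADR_ENABLED left ON, the trace files go to the Automatic Diagnostic Repository rather than TRACE_DIRECTORY_CLIENT, which could explain why no *.TRC files showed up where expected.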

Why would LDAP fail for 32-bit but not 64-bit?


Get this bounty!!!

#StackBounty: #sql-server #oracle #replication #hashing #checksum Is there a cross-platform way to compare data in one column on each …

Bounty: 50

I have an Oracle 12 database with lots of tables, and I am replicating several of the tables (a subset of rows) into a SQL Server 2016 database. The subset of rows can be established with a WHERE clause on the Oracle side.

I have two web services that can expose anything I want to from that data, one on each side.

Do you have a suggestion for an approach of what I can expose, then compare to find out if the data between the systems matches?

I am currently exposing, from one table which has a few million rows, the COUNT(*), which is a no-brainer since it is very efficient. So far, so good.

I’m also exposing the SUM of each of a few NUMBER(18,2) columns and comparing it to the corresponding SUM on the SQL Server side. However, this is problematic, as it has to scan the whole table in SQL Server; it is sometimes blocked, and sometimes might cause other processes to block. I imagine similar problems could occur on the Oracle side too.

Also, the SUM will not tell me whether the rows match, only that the totals match; if an amount was improperly added to one row and subtracted from another, I wouldn't catch it.

I’ve pondered whether Oracle’s STANDARD_HASH might help me, but it seems cumbersome or error-prone to try to generate the exact same HASH on the SQL Server side, and also this doesn’t help with the inefficiency/blocking nature of the call.

So is there any way to have both databases keep track of a hash, checksum, CRC code, or other summary of a column's data that is efficient to retrieve, and that I can then use to compare so I have some idea whether the data is the same on both sides? It need not be a perfect solution; for example, comparing SUMs is close but perhaps not quite good enough.

As a first stab, I created a "summary" indexed view, with columns derived from SUMs, on the SQL Server side. This makes querying the view very fast, but incurs an additional penalty on every write to the large table underneath. Still, I think it will work, but I'd like to improve on it. Other, better ideas?
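
To make the STANDARD_HASH idea more concrete, both engines can compute MD5 over an identical canonical string per row. The sketch below is only an illustration (my_table, id and amount are placeholders, and the formatting, NULL handling and character set of the hashed bytes all have to agree exactly for the hashes to match):

-- Oracle side: hash a canonical string per row
-- (STANDARD_HASH hashes the bytes in the database character set)
select id,
       standard_hash(id || '|' || to_char(amount, 'FM9999999990.00'), 'MD5') as row_hash
  from my_table;

-- SQL Server side: build the identical string and hash it;
-- cast to varchar so the hashed bytes match Oracle's (nvarchar would hash UTF-16 bytes)
select id,
       hashbytes('MD5', cast(concat(id, '|', format(amount, '0.00')) as varchar(100))) as row_hash
  from my_table;

The per-row hashes can be compared directly or rolled up into a per-table value, but any such rollup still requires scanning the table, so it does not by itself remove the blocking concern described above.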


Get this bounty!!!

#StackBounty: #php #oracle #oci8 dyld: lazy symbol binding failed for php oci8 on Apple M1

Bounty: 100

On Intel-based macOS my installation process for php-oci8 was fine.
After I moved to the new Apple M1 architecture, I got a strange exception and can't understand how to resolve it.

Installation process:

brew install php

cd ~/Downloads

curl -O https://download.oracle.com/otn_software/mac/instantclient/198000/instantclient-basic-macos.x64-19.8.0.0.0dbru.dmg
curl -O https://download.oracle.com/otn_software/mac/instantclient/198000/instantclient-sdk-macos.x64-19.8.0.0.0dbru.dmg
curl -O https://download.oracle.com/otn_software/mac/instantclient/198000/instantclient-sqlplus-macos.x64-19.8.0.0.0dbru.dmg

hdiutil mount instantclient-basic-macos.x64-19.8.0.0.0dbru.dmg 
hdiutil mount instantclient-sdk-macos.x64-19.8.0.0.0dbru.dmg
hdiutil mount instantclient-sqlplus-macos.x64-19.8.0.0.0dbru.dmg

/Volumes/instantclient-basic-macos.x64-19.8.0.0.0dbru/install_ic.sh

hdiutil unmount /Volumes/instantclient-basic-macos.x64-19.8.0.0.0dbru
hdiutil unmount /Volumes/instantclient-sdk-macos.x64-19.8.0.0.0dbru
hdiutil unmount /Volumes/instantclient-sqlplus-macos.x64-19.8.0.0.0dbru

sudo mkdir -p /opt/oracle
sudo mv instantclient_19_8/ /opt/oracle

pecl install oci8

# Oracle Instant Client [autodetect] : instantclient,/opt/oracle/instantclient_19_8

Build process completed successfully
Installing '/opt/homebrew/Cellar/php/8.0.1_1/pecl/20200930/oci8.so'
install ok: channel://pecl.php.net/oci8-3.0.1
Extension oci8 enabled in php.ini

When I try to check whether oci8 is loaded correctly, I get the following problem:

php -i
oci8

OCI8 Support => enabled
OCI8 DTrace Support => disabled
OCI8 Version => 3.0.1
dyld: lazy symbol binding failed: Symbol not found: _OCIClientVersion
  Referenced from: /opt/homebrew/lib/php/pecl/20200930/oci8.so
  Expected in: flat namespace

dyld: Symbol not found: _OCIClientVersion
  Referenced from: /opt/homebrew/lib/php/pecl/20200930/oci8.so
  Expected in: flat namespace

Abort trap: 6

Can someone help me figure out how to deal with this problem?

MacOS: 11.1, Apple M1, PHP: 8.0.1, OCI8 Ext: 3.0.1, Oracle Instant Client: 19.8

UPD 1: Tested with the latest Oracle Client v19.8 and with v12.2; the problem is the same

UPD 2: Tested via .dmg and via .zip; the problem is the same
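
One thing worth checking here (a suggestion, not something from the post) is whether all the pieces are the same CPU architecture, since macOS cannot load x86_64 libraries into a native arm64 process:

# compare the architectures of PHP, the compiled extension, and the Instant Client
file $(which php)
file /opt/homebrew/lib/php/pecl/20200930/oci8.so
file /opt/oracle/instantclient_19_8/libclntsh.dylib

Homebrew under /opt/homebrew is the native arm64 build, while the Instant Client downloads above are the macos.x64 packages, so a mismatch in this output would be consistent with the dyld error shown.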


Get this bounty!!!

#StackBounty: #oracle #datapump ambiguous "invalid operation" while importing during Oracle datapump

Bounty: 50

Let me summarize the problem first; after the summary I'll give the details of the SQL I used to get where I'm at.

I’m exporting a schema from a production AWS RDS Oracle instance, using a database link to download the file to my local development database, then running an import locally on an empty database of a freshly installed Oracle in a Docker container. The export and import use Datapump. I get a very ambiguous error message "invalid operation" with equally ambiguous details suggesting I call "DBMS_DATAPUMP.GET_STATUS" to "further describe the error". When I do, I get exactly the same ambiguous "invalid operation" with a suggestion to call "GET_STATUS" to further describe the error.

I’m at a loss of where to even begin in diagnosing and solving this problem.

Here are the detailed steps I took. I have substituted our schema name with "MY_SCHEMA" to protect the identity of our client… and if there’s any mismatch in that text, I assure you it is correct in my console and just a mistake in the substitution for this question. I used SQLDeveloper to run these commands.

  1. On the AWS RDS Oracle instance running 19c:
    DECLARE
    hdnl NUMBER;
    BEGIN
    hdnl := DBMS_DATAPUMP.OPEN( operation => 'EXPORT', job_mode => 'SCHEMA', job_name=>null, version=> '18.4.0.0.0');
    DBMS_DATAPUMP.ADD_FILE( handle => hdnl, filename => 'my_schema.dmp', directory => 'DATA_PUMP_DIR', filetype => dbms_datapump.ku$_file_type_dump_file);
    DBMS_DATAPUMP.ADD_FILE( handle => hdnl, filename => 'my_schema.log', directory => 'DATA_PUMP_DIR', filetype => dbms_datapump.ku$_file_type_log_file);
    DBMS_DATAPUMP.METADATA_FILTER(hdnl,'SCHEMA_EXPR','IN (''MY_SCHEMA'')');
    DBMS_DATAPUMP.START_JOB(hdnl);
    END;
    /
  2. Connect from my local dev database (18c) to the AWS RDS instance and download the dmp file. And yes, here I connect as the schema owner and not as "master". Connecting as the schema owner works for downloading the file and connecting as "master" does not, whereas dumping as the schema owner doesn't work in step one; unless you can instruct me how to do that and whether it would solve my problem.
    create database link to_rds connect to my_schema identified by password using '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=my_schema.aljfjske.us-west-1.rds.amazonaws.com)(PORT=1521))(CONNECT_DATA=(SID=ORCL)))';
    
    BEGIN
    DBMS_FILE_TRANSFER.GET_FILE(
    source_directory_object       => 'DATA_PUMP_DIR',
    source_file_name              => 'my_schema.dmp',
    source_database               => 'to_rds',
    destination_directory_object  => 'DATA_PUMP_DIR',
    destination_file_name         => 'my_schema.dmp'
    );
    END;
    /
  3. Start the import while logged in as "sys" with the "sysdba" role on my local database (connected to the pluggable database called "my_schema").
    DECLARE
    hdnl NUMBER;
    BEGIN
    hdnl := DBMS_DATAPUMP.OPEN( operation => 'IMPORT', job_mode => 'SCHEMA', job_name=>null);
    DBMS_DATAPUMP.ADD_FILE( handle => hdnl, filename => 'my_schema.dmp', directory => 'DATA_PUMP_DIR');
    DBMS_DATAPUMP.METADATA_FILTER(hdnl,'SCHEMA_EXPR','IN (''MY_SCHEMA'')');
    DBMS_DATAPUMP.START_JOB(hdnl);
    end;
    /

And I get the following error:

DECLARE
hdnl NUMBER;
BEGIN
hdnl := DBMS_DATAPUMP.OPEN( operation => 'IMPORT', job_mode => 'SCHEMA', job_name=>null);
DBMS_DATAPUMP.ADD_FILE( handle => hdnl, filename => 'my_schema.dmp', directory => 'DATA_PUMP_DIR');
DBMS_DATAPUMP.METADATA_FILTER(hdnl,'SCHEMA_EXPR','IN (''MY_SCHEMA'')');
DBMS_DATAPUMP.START_JOB(hdnl);
end;
Error report -
ORA-39002: invalid operation
ORA-06512: at "SYS.DBMS_DATAPUMP", line 7297
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.DBMS_DATAPUMP", line 4932
ORA-06512: at "SYS.DBMS_DATAPUMP", line 7291
ORA-06512: at line 7
39002. 00000 -  "invalid operation"
*Cause:    The current API cannot be executed because of inconsistencies
           between the API and the current definition of the job.
           Subsequent messages supplied by DBMS_DATAPUMP.GET_STATUS
           will further describe the error.
*Action:   Modify the API call to be consistent with the current job or
           redefine the job in a manner that will support the specified API.

I've spent 6+ hours on this already, reading Oracle docs and guides, trying things, and printing more information to the console, and nothing: I get exactly the same error message with no more information. The dump file is on the system, and I'm pretty sure it's being read properly because I can call utl_file.fgetattr to get its size. I've also tried exporting and importing with different users. Nothing. I'm totally in the dark here. Even suggestions on what to try to diagnose this would be much appreciated. This is a fresh install of Oracle Database 18c Express Edition using Oracle's Docker container files from their GitHub account (which is pretty slick, BTW). The production system on RDS has been up for several years, and I've exported with Datapump dozens of times during those years and successfully imported into my local 11g Express Edition installation on Fedora Linux. (That no longer works since the production database was recently upgraded from 12c to 19c, which is what started me on this whole path.)
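
For what it's worth, the pattern Oracle's documentation uses to pull the full error stack out of a failed Datapump call looks roughly like the sketch below, wrapped around the START_JOB call. This is only a sketch of the documented API, not a tested fix:

DECLARE
  hdnl      NUMBER;
  ind       NUMBER;
  job_state VARCHAR2(30);
  sts       ku$_Status;
  le        ku$_LogEntry;
BEGIN
  hdnl := DBMS_DATAPUMP.OPEN( operation => 'IMPORT', job_mode => 'SCHEMA', job_name => null);
  DBMS_DATAPUMP.ADD_FILE( handle => hdnl, filename => 'my_schema.dmp', directory => 'DATA_PUMP_DIR');
  DBMS_DATAPUMP.METADATA_FILTER(hdnl, 'SCHEMA_EXPR', 'IN (''MY_SCHEMA'')');
  BEGIN
    DBMS_DATAPUMP.START_JOB(hdnl);
  EXCEPTION
    WHEN OTHERS THEN
      -- ask the job itself for its error log lines, as the ORA-39002 text suggests
      DBMS_DATAPUMP.GET_STATUS(hdnl, DBMS_DATAPUMP.KU$_STATUS_JOB_ERROR, 0, job_state, sts);
      IF BITAND(sts.mask, DBMS_DATAPUMP.KU$_STATUS_JOB_ERROR) != 0 THEN
        le := sts.error;
        IF le IS NOT NULL THEN
          ind := le.FIRST;
          WHILE ind IS NOT NULL LOOP
            DBMS_OUTPUT.PUT_LINE(le(ind).LogText);
            ind := le.NEXT(ind);
          END LOOP;
        END IF;
      END IF;
      RAISE;
  END;
END;
/

It can also help to attach a log file to the import job with DBMS_DATAPUMP.ADD_FILE and filetype => dbms_datapump.ku$_file_type_log_file, the same way the export job in step 1 does, so Datapump writes its own diagnostics to DATA_PUMP_DIR.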


Get this bounty!!!

#StackBounty: #geospatial #oracle #plsql Trigger to update vertices table

Bounty: 50

I have an Oracle 18c table that has an SDO_GEOMETRY column.

create table a_test_sdo_geom (id number, shape mdsys.sdo_geometry);

insert into a_test_sdo_geom (id, shape)
    values(1, sdo_util.from_wktgeometry (
               'MULTILINESTRING ((671834.096 4861699.7127, 671836.5099 4861701.9158), (671838.2206 4861700.7607, 671842.2311 4861703.3157))'));
insert into a_test_sdo_geom (id, shape)
    values(2, sdo_util.from_wktgeometry (
               'MULTILINESTRING ((671800.123 4861600.1234, 671800.1234 4861700.1234))'));
commit;

And I have a sister table that stores the vertices from that table as rows:

create table a_test_vertices (id number, vertex_num number, x number, y number);

insert into a_test_vertices (
select
    a.id,
    v.id as vertex_num,
    v.x,
    v.y
from
    a_test_sdo_geom a
    ,table(sdo_util.getvertices(a.shape)) v
);
commit;

        ID VERTEX_NUM          X          Y
---------- ---------- ---------- ----------
         1          1 671834.096 4861699.71
         1          2 671836.510 4861701.92
         1          3 671838.221 4861700.76
         1          4 671842.231 4861703.32
         2          1 671800.123 4861600.12
         2          2 671800.123 4861700.12

I have a trigger that will automatically update the vertices table whenever the source table’s SHAPE gets updated:

  • When a source row is created
  • When a source row is deleted
  • When the source SHAPE column is updated (but not when any of the other columns get updated)

create or replace trigger a_test_sdo_geom_trig
   before insert or update of shape or delete
   on a_test_sdo_geom
   for each row
begin
   case
      when inserting then
         insert into a_test_vertices (id, vertex_num, x, y)
         select :new.id, id, x, y
           from table(sdo_util.getvertices(:new.shape));
      when deleting then
         delete a_test_vertices
         where id = :old.id;
      when updating('shape') then
         merge into a_test_vertices a
            using (select id, x, y
                     from table(sdo_util.getvertices(:new.shape))) b
               on (a.vertex_num = b.id and a.id = :old.id)
            when matched then
               update
                  set x = b.x
                     ,y = b.y
            when not matched then
               insert(id, vertex_num, x, y)
               values(:old.id, b.id, b.x, b.y);
   end case;
end a_test_sdo_geom_trig;

Question:

Can the trigger be improved?


Get this bounty!!!

#StackBounty: #oracle #performance #wait-types #oracle-19c Reduce file header block

Bounty: 50

I'm in the middle of my first Statspack analysis, and one of the most time-consuming events is buffer busy waits (around 30% of the time). I checked the Buffer Wait statistics, and it seems most of that time is spent waiting on the file header block (about 10x more time than the second wait event).
Could you give me a hint on how to approach this issue? The only advice I found online was on this site, saying:

File Header Block – Most likely extent allocation problems; look at the extent size on the tablespace and increase it so there are fewer extent allocations and less contention on the File Header Block.

I was googling again about increasing extent size and I'm a bit lost here. It seems that extent size is assigned to a particular tablespace, and I would need to recreate my tablespaces in order to change it, but that only applies to the initial extent. How can I increase it without recreating all the tablespaces in my DB?
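
As a starting point for that question, the current extent settings can at least be read from the data dictionary without changing anything (a sketch; run as a suitably privileged user):

select tablespace_name, extent_management, allocation_type,
       initial_extent, next_extent, segment_space_management
  from dba_tablespaces;

-- how many extents the busiest segments currently have
select owner, segment_name, segment_type, tablespace_name, count(*) as extents
  from dba_extents
 group by owner, segment_name, segment_type, tablespace_name
 order by extents desc;

With locally managed tablespaces and ALLOCATION_TYPE = SYSTEM, extent sizes are chosen automatically and grow as the segment grows, so the usual route to larger uniform extents is to create a new tablespace with UNIFORM SIZE and move the hot segments into it, rather than altering an existing tablespace.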

Is there anything else I should look into when I see a lot of file header block waiting time?

Cheers


Update 1:

Tablespace
------------------------------
                 Av      Av     Av                    Av        Buffer Av Buf
         Reads Reads/s Rd(ms) Blks/Rd       Writes Writes/s      Waits Wt(ms)
-------------- ------- ------ ------- ------------ -------- ---------- ------
AA_AAA
       438,217     220    3.8     1.0      788,012      396      2,007   14.3
UNDOTBS1
             0       0    0.0               21,104       11     21,647  993.6
TBS_PERFSTAT
           143       0   40.1     1.0        3,340        2          0    0.0
SYSTEM
            26       0   14.6     1.0          114        0          0    0.0
SYSAUX
            14       0    4.3     1.0           98        0          0    0.0
AA_AAA_IDX
            74       0   15.7     1.0            2        0          4    7.5
BB_BBB
             0       0    0.0                   54        0        820  993.0
BB_BBB_IDX
             0       0    0.0                   40        0      1,527  993.5
          -------------------------------------------------------------
File IO Stats  DB/Inst: ORCL/orcl  Snaps: 9833-9834
->Mx Rd Bkt: Max bucket time for single block read
->ordered by Tablespace, File

Tablespace               Filename
------------------------ ----------------------------------------------------
                        Av   Mx                                             Av
                 Av     Rd   Rd    Av                    Av        Buffer BufWt
         Reads Reads/s (ms)  Bkt Blks/Rd       Writes Writes/s      Waits  (ms)
-------------- ------- ----- --- ------- ------------ -------- ---------- ------
AA_AAA                   /oradata/data_files/ORCL/eim.dbf
             0       0                             54        0        820  993.0

AA_AAA_IDX               /oradata/data_files/ORCL/eim_idx.dbf
             0       0                             40        0      1,527  993.5

BB_BBB                   /oradata/data_files/ORCL/uel.dbf
       438,217     220   3.8   1     1.0      788,012      396      2,007   14.3

BB_BBB_IDX               /oradata/data_files/ORCL/uel_idx.dbf
            74       0  15.7   1     1.0            2        0          4    7.5

SYSAUX                   /oradata/data_files/ORCL/sysaux01.dbf
            14       0   4.3   1     1.0           98        0          0

SYSTEM                   /oradata/data_files/ORCL/system01.dbf
            26       0  14.6   1     1.0          114        0          0

TBS_PERFSTAT             /oradata/data_files/ORCL/statspack_data01.dbf
           143       0  40.1   1     1.0        3,340        2          0

UNDOTBS1                 /oradata/data_files/ORCL/undotbs01.dbf
             0       0                         21,104       11     21,647  993.6

          -------------------------------------------------------------
File Read Histogram Stats  DB/Inst: ORCL/orcl  Snaps: 9833-9834
->Number of single block reads in each time range
->Tempfiles are not included
->ordered by Tablespace, File

Tablespace               Filename
------------------------ ----------------------------------------------------
    0 - 2 ms     2 - 4 ms    4 - 8 ms     8 - 16 ms   16 - 32 ms       32+ ms
------------ ------------ ------------ ------------ ------------ ------------
AA_AAA_IDX               /oradata/data_files/ORCL/uel_idx.dbf
          19            0            0            0            0            0

SYSAUX                   /oradata/data_files/ORCL/sysaux01.dbf
          10            0            0            0            0            0

AA_AAA                   /oradata/data_files/ORCL/uel.dbf
     392,355            0            0            0            0            0

TBS_PERFSTAT             /oradata/data_files/ORCL/statspack_data01.dbf
          29            0            0            0            0            0

SYSTEM                   /oradata/data_files/ORCL/system01.dbf
          18            0            0            0            0            0

          -------------------------------------------------------------


Get this bounty!!!

#StackBounty: #azure #database #oracle #azure-networking Integration of Azure with Oracle Cloud Infrastructure (OCI) ORA-03113: end-of-…

Bounty: 100

I’m trying to integrate Azure and OCI using this approach and this article.

Now, I have the infrastructure up and running. It consists of a VM in Azure, an Autonomous Database (ATP) in Oracle Cloud Infrastructure (OCI), and a Java application on the VM. The application successfully connected to the database.

However, after some period of time the application fails with:

ORA-03113: end-of-file on communication channel

Process ID: 86437

Session ID: 57114 Serial number: 29955

How can I identify where the problem is (Azure, OCI, etc.) so that I have an idea of how to fix it?


Get this bounty!!!

#StackBounty: #oracle #linux #sqlplus #oracle-sqlcl oracle sqlcl and sqlplus – separate login.sql on linux

Bounty: 50

I feel I'm probably missing something obvious here, but I can't see an easy way to have a separate login.sql for sqlplus and sqlcl.

This is problematic for me as the errors thrown up by sqlplus for the set commands it doesn’t understand interfere with various scripts.

I think I could do it with aliases that change the SQLPATH prior to starting sqlcl, but would prefer a cleaner way if possible.
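
For illustration, the alias workaround mentioned above could look something like this in a shell profile (directory names are placeholders, and SQLcl's launcher is assumed to be on the PATH as sql):

# each tool gets its own script directory, so each picks up its own login.sql
alias sqlplus='SQLPATH=$HOME/sqlpath/sqlplus sqlplus'
alias sql='SQLPATH=$HOME/sqlpath/sqlcl sql'

With that in place, the sqlplus-only set commands live in one login.sql and the SQLcl-specific ones in the other.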

Is there a separate file name that only sqlcl will look for? Or separate location?


Get this bounty!!!