#StackBounty: #sql-server #jdbc How to prevent SQLServer JDBC Query with XML type caching entire resultset at query?

Bounty: 50

I have a query which generates a fairly large XML document (~30k) as a query column for each record of a large table, of the form…

SELECT recordKey, lastUpdatedDate, ( SELECT ... FOR XML PATH( 'elemName' ), TYPE )
FROM largeTable
ORDER BY lastUpdatedDate

If I run this query from SQL Server Management Studio, it returns almost instantly, showing the first rows as I would expect, and continues to run in the background.

However, when I run this query from the Camel JDBC component in StreamList mode, it appears to cache the entire result set at the point of querying, which means I run out of memory.

I’ve checked the JDBC driver properties and explicitly set the responseBuffering property to adaptive, and have also tried setting selectMethod to cursor; neither appears to make any difference to my query.
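For reference, a minimal sketch of how these two driver properties can be set directly on the Microsoft SQL Server JDBC connection URL (the host and database names here are placeholders, not taken from the question):

```java
// Sketch: building a SQL Server JDBC URL with streaming-friendly properties.
// Host and database names are placeholders.
public class SqlServerUrl {
    static String buildUrl(String host, String database) {
        return "jdbc:sqlserver://" + host
                + ";databaseName=" + database
                + ";responseBuffering=adaptive"  // fetch rows as the app reads them
                + ";selectMethod=cursor";        // use a server-side cursor
    }

    public static void main(String[] args) {
        System.out.println(buildUrl("dbhost", "mydb"));
    }
}
```

With responseBuffering=adaptive the driver should stream rows as the application reads them, though, as the question notes, this may not be enough on its own for large XML-typed columns.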

Is this a characteristic of querying XML with JDBC, or are there some parameters I need to set differently?

Get this bounty!!!

#StackBounty: #jdbc #google-apps-script #google-spreadsheet Jdbc connection error from Google Apps Script

Bounty: 100

I have created a Google Cloud Project MySQL database to use in conjunction with the Jdbc service provided by Google Apps Script. Everything went as planned with the connection. I am basically connecting as shown in the docs:

var conn = Jdbc.getCloudSqlConnection(dbUrl, user, userPwd);

I shared the file with another account and all of a sudden I am seeing a red error saying:

‘Failed to establish a database connection. Check connection string, username and password.’

Nothing changed in the code, but there is an error. When I go back to my original account and run the same bit of code, there is no error. What is happening here? Any ideas?

Get this bounty!!!

#StackBounty: #security #ssl #sql-server #jdbc SQL Server 2014 enabling TLS 1.1 along with TLS 1.2

Bounty: 50

I have a Windows Server 2012 R2 machine which is a DC, with SQL Server 2014 (Express) updated to the latest SP2 with CU10 (12.2.5571.0) in a testing environment.
I disabled all protocols except TLS 1.1 and TLS 1.2 by setting the registry keys under:

HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols
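For reference, the per-protocol SCHANNEL keys usually look like the following .reg fragment (a sketch of the typical layout, not an export from the machine in question); note that a reboot is generally required before SCHANNEL changes take effect:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Server]
"Enabled"=dword:00000001
"DisabledByDefault"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Client]
"Enabled"=dword:00000001
"DisabledByDefault"=dword:00000000
```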

I initially enabled only TLS 1.2, but I also need TLS 1.1 for backward compatibility with a Java application which runs on another machine in the same domain. In order to use TLS 1.2, a specific JDBC driver is needed and a property must be added to the JDBC URL (see the discussion here).

In the meantime I want to let the application work over TLS 1.1, so I re-enabled it. However, the application can’t connect to the database, and it seems TLS 1.1 is not being used by SQL Server. To test the connection I tried the following commands:

1) openssl s_client -connect <Server IP>:1433 -ssl3

2) openssl s_client -connect <Server IP>:1433 -tls1_1

3) openssl s_client -connect <Server IP>:1433 -tls1_2

Strangely enough, 2) fails even though the only protocol enabled on both the server and the client machine is TLS 1.1, while 1) runs without errors even though SSL 3 is disabled on both machines.

I also tried using Management Studio from the client machine; the connection only works if TLS 1.2 is enabled on both the client and the server. If only TLS 1.1 is enabled, the following error is displayed:

A connection was successfully established with the server, but then an
error occurred during the login process. (provider: SSL Provider,
error: 0 – The client and server cannot communicate, because they do
not possess a common algorithm.)

How is this possible?

Do I need to force something in Secure Channel (SCHANNEL)?

Why does SQL Server no longer allow TLS 1.1 connections?

Get this bounty!!!

#StackBounty: #java #spring #jdbc #c3p0 #java-melody how to configure JavaMelody to Monitor Jdbc Conections in C3p0 DataSource

Bounty: 50

I’m using a Spring configuration file to configure C3P0. To monitor the DataSource I configured net.bull.javamelody.SpringDataSourceFactoryBean as mentioned in the JavaMelody user guide, but my report shows 0 active JDBC connections even though my minPoolSize is 10. What did I miss?

In web.xml I added monitoring-spring.xml.


My Spring JDBC configuration file is:

<bean id="sql2oDatasource" class="com.mchange.v2.c3p0.ComboPooledDataSource">
    <property name="driverClass" value="#{dbProps['ops.jdbc.driverClassName']}"/>
    <property name="jdbcUrl" value="#{dbProps['ops.jdbc.url']}"/>
    <property name="user" value="#{dbProps['ops.jdbc.username']}"/>
    <property name="password" value="#{dbProps['ops.jdbc.password']}"/>
    <property name="maxPoolSize" value="#{dbProps['ops.c3p0.max_size']}" />
    <property name="minPoolSize" value="#{dbProps['ops.c3p0.min_size']}" />
    <property name="maxStatements" value="#{dbProps['ops.c3p0.max_statements']}" />
    <property name="checkoutTimeout" value="#{dbProps['ops.c3p0.timeout']}" />
    <property name="preferredTestQuery" value="SELECT 1"/>
</bean>

<!-- Configuring the session factory for SQL-2-O -->
<bean id="sql2oSession" class="org.sql2o.Sql2o">
    <constructor-arg ref="wrappedDBDataSource" />
    <constructor-arg value="PostgreSQL" type="org.sql2o.QuirksMode" />
</bean>

<bean id="wrappedDBDataSource" class="net.bull.javamelody.SpringDataSourceFactoryBean" primary="true">
    <property name="targetName" value="sql2oDatasource" />
</bean>

I also tried passing the driver class as net.bull.javamelody.JdbcDriver in the datasource, with the real driver supplied as:

   <property name="properties">
       <props>
           <prop key="driver">org.postgresql.Driver</prop>
       </props>
   </property>

But the PostgreSQL driver is not getting registered this way.

Get this bounty!!!

#StackBounty: #mysql #r #jdbc #timeout #dbi Is there a way to timeout a MySql query when using DBI and dbGetQuery?

Bounty: 50

I realize that

dbGetQuery comes with a default implementation that calls dbSendQuery, then dbFetch, ensuring that the result is always freed by dbClearResult.


dbClearResult frees all resources (local and remote) associated with a result set. In some cases (e.g., very large result sets) this can be a critical step to avoid exhausting resources (memory, file descriptors, etc.)

But my team just experienced a locked table that we had to go into MySQL to kill the process for, and I’m wondering: is there a way to time out a query submitted using the DBI package?

I’m looking for, and can’t find, the equivalent of

dbGetQuery(conn = connection, 'select stuff from that_table', timeout = 90)

I tried this, and profiled the function with and without the parameter set; it doesn’t appear to do anything. Why would it, if dbClearResult is always in play?

Get this bounty!!!

#StackBounty: #java #mysql #apache-spark #jdbc #amazon-s3 Converting mysql table to spark dataset is very slow compared to same from cs…

Bounty: 50

I have a CSV file in Amazon S3 which is 62 MB in size (114,000 rows). I am converting it into a Spark dataset and taking the first 500 rows from it. The code is as follows:

DataFrameReader df = new DataFrameReader(spark).format("csv").option("header", true);
Dataset<Row> set = df.load("s3n://" + this.accessId.replace("\"", "") + ":" + this.accessToken.replace("\"", "") + "@" + this.bucketName.replace("\"", "") + "/" + this.filePath.replace("\"", ""));


The whole operation takes 20 to 30 seconds.

Now I am trying the same thing, but using a MySQL table with 119,000 rows instead of the CSV. The MySQL server is on Amazon EC2. The code is as follows:

String url ="jdbc:mysql://"+this.hostName+":3306/"+this.dataBaseName+"?user="+this.userName+"&password="+this.password;

SparkSession spark=StartSpark.getSparkSession();

SQLContext sc = spark.sqlContext();

DataFrameReader df = new DataFrameReader(spark).format("jdbc");
Dataset<Row> set = df
            .option("url", url)
            .option("dbtable", this.tableName)
            .load();

This takes 5 to 10 minutes.
I am running Spark inside the JVM, using the same configuration in both cases.

My issue is not how to decrease the time required; I know that in the ideal case Spark would run on a cluster. What I cannot understand is why there is such a big time difference between the two cases above.
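One likely contributor: with only url and dbtable set, Spark’s JDBC source reads the whole table through a single connection in one partition, while the CSV read is split across tasks. Supplying partitioning options lets Spark issue parallel range queries. A hedged sketch follows; the table and column names are hypothetical, and the snippet only builds the option map (applying it requires a live SparkSession and database):

```java
// Sketch: the options Spark's "jdbc" format accepts for a parallel read.
// Table/column names and bounds are hypothetical examples.
import java.util.LinkedHashMap;
import java.util.Map;

public class JdbcReadOptions {
    static Map<String, String> partitionedOptions(String url, String table,
            String partitionColumn, long lower, long upper, int numPartitions) {
        Map<String, String> opts = new LinkedHashMap<>();
        opts.put("url", url);
        opts.put("dbtable", table);
        opts.put("partitionColumn", partitionColumn); // must be numeric or date
        opts.put("lowerBound", Long.toString(lower));
        opts.put("upperBound", Long.toString(upper));
        opts.put("numPartitions", Integer.toString(numPartitions));
        opts.put("fetchsize", "1000"); // JDBC fetch size hint
        return opts;
    }

    public static void main(String[] args) {
        Map<String, String> opts = partitionedOptions(
                "jdbc:mysql://host:3306/db", "my_table", "id", 0L, 119000L, 8);
        // With a SparkSession these would be applied as:
        // spark.read().format("jdbc").options(opts).load();
        System.out.println(opts.get("numPartitions"));
    }
}
```

With these options Spark runs numPartitions range queries in parallel instead of one full-table scan over a single connection.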

Get this bounty!!!

How to pass Service Name or SID in JDBC URL

Below is the way to send either a Service Name or a SID in the JDBC URL to connect to Oracle.

For a Service Name, the thin-driver URL format is jdbc:oracle:thin:@//<host>:<port>/<service_name>.

For example:

jdbc:oracle:thin:@//localhost:1521/myservice

For a SID, the format is jdbc:oracle:thin:@<host>:<port>:<SID>.

For example:

jdbc:oracle:thin:@localhost:1521:mysid

Oracle DOC

Best practices in JDBC Connection

JDBC Connection Scope

How should your application manage the life cycle of JDBC connections? Asked another way: what is the scope of the JDBC connection object within your application? Let’s consider a servlet that performs JDBC access. One possibility is to define the connection with servlet scope, as follows.

import java.sql.*;
import javax.servlet.*;
import javax.servlet.http.*;

public class JDBCServlet extends HttpServlet {

    private Connection connection;

    public void init(ServletConfig c) throws ServletException {
        // Open the connection here
    }

    public void destroy() {
        // Close the connection here
    }

    public void doGet(HttpServletRequest req, HttpServletResponse res) throws ServletException {
        // Use the connection here
        Statement stmt = connection.createStatement();
        // do JDBC work.
    }
}
Using this approach, the servlet creates a JDBC connection when it is loaded and destroys it when it is unloaded. The doGet() method has immediate access to the connection since it has servlet scope. However, the database connection is kept open for the entire lifetime of the servlet, and the database has to retain an open connection for every user connected to your application. If your application supports a large number of concurrent users, its scalability will be severely limited!

Method Scope Connections

To avoid the long lifetime of the JDBC connection in the example above, we can change the connection to have method scope, as follows.

public class JDBCServlet extends HttpServlet {

  private Connection getConnection() throws SQLException {
    // create and return a JDBC connection here
  }

  public void doGet(HttpServletRequest req, HttpServletResponse res) throws ServletException {
    try {
      Connection connection = getConnection();
      // use the connection here
    } catch (SQLException sqlException) {
      // handle the error here
    }
  }
}
This approach is a significant improvement over our first example because the connection’s lifetime is now reduced to the time it takes to execute doGet(). The number of connections to the back-end database at any instant is reduced to the number of users concurrently executing doGet(). However, this example creates and destroys many more connections than the first, and this could easily become a performance problem.

In order to retain the advantages of a method-scoped connection while reducing the cost of creating and destroying a large number of connections, we now use connection pooling to arrive at our finished example, which illustrates best practices of connection pool usage.

import java.sql.*;
import javax.naming.*;
import javax.servlet.*;
import javax.servlet.http.*;
import javax.sql.*;

public class JDBCServlet extends HttpServlet {

  private DataSource datasource;

  public void init(ServletConfig config) throws ServletException {
    try {
      // Look up the JNDI data source only once at init time
      Context envCtx = (Context) new InitialContext().lookup("java:comp/env");
      datasource = (DataSource) envCtx.lookup("jdbc/MyDataSource");
    } catch (NamingException e) {
      // handle the lookup failure here
    }
  }

  private Connection getConnection() throws SQLException {
    return datasource.getConnection();
  }

  public void doGet(HttpServletRequest req, HttpServletResponse res) throws ServletException {
    Connection connection = null;
    try {
      connection = getConnection();
      // do JDBC work here
    } catch (SQLException sqlException) {
      // handle the error here
    } finally {
      if (connection != null) {
        try { connection.close(); } catch (SQLException e) {}
      }
    }
  }
}
This approach uses the connection only for the minimum time the servlet requires it and also avoids creating and destroying a large number of physical database connections. The connection best practices that we have used are:

A JNDI datasource is used as a factory for connections. The JNDI datasource is instantiated only once, in init(), since JNDI lookups can be slow. JNDI should be configured so that the bound datasource implements connection pooling. Connections issued from the pooling datasource will be returned to the pool when closed.

We have moved connection.close() into a finally block to ensure that the connection is closed even if an exception occurs during the doGet() JDBC processing. This practice is essential when using a connection pool: if a connection is not closed, it will never be returned to the pool and become available for reuse. A finally block can also guarantee the closure of resources attached to JDBC statements and result sets when unexpected exceptions occur; just call close() on those objects as well.
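On Java 7 and later, the same guarantee is usually expressed with try-with-resources, which closes the connection (returning a pooled connection to the pool) automatically. A minimal sketch of the mechanics, using a stand-in AutoCloseable rather than a real JDBC connection:

```java
// Sketch: try-with-resources closes the resource automatically, even on error.
// PooledConnection is a stand-in class, not part of JDBC.
public class TryWithResourcesDemo {
    static class PooledConnection implements AutoCloseable {
        void doWork() { System.out.println("working"); }
        @Override public void close() { System.out.println("closed"); }
    }

    public static void main(String[] args) {
        // In a servlet this would be:
        // try (Connection c = datasource.getConnection()) { ... }
        try (PooledConnection c = new PooledConnection()) {
            c.doWork();
        } // close() runs here without an explicit finally block
    }
}
```

Since java.sql.Connection implements AutoCloseable, the servlet’s finally block above can be replaced with this form without changing its behavior.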

For more details: