#StackBounty: #sql-server #tsql #pivot #ssms #export-to-excel Export Multiple Dynamically Pivoted SQL Query Results to Excel

Bounty: 100

I have a batch query that I execute from SQL Server Management Studio (v18.5). It has multiple dynamic pieces and generates multiple result sets that I would like to export to Excel automatically from the SSMS console, if possible.

In the batch query, I first select a unique set of values from one column in a table, and I iterate through that list to build a dynamic pivot query for each value. Each pivot query produces a different set of results for each value.

For example, the unique list that I loop through:

Type
-----
Fan
Compressor
Belt
Motor
Filter

The pivot query results for Fan will have a unique set of columns that are different from the pivot query results for Compressor.

Fan pivot columns

FanID, Speed, Weight, Blade Size, RPM

Compressor pivot columns

CompressorID, HP, Voltage, Amps, Height

Each time I loop through the Type list, I would like to export the pivoted query results to an Excel file. Ideally, I would like to export to one workbook, with each Type’s pivoted query results on its own worksheet. Since each pivoted query result will have a different set of columns, I cannot compile all of the query results into one table and then export to Excel (or CSV). I am trying to avoid having multiple files, one for each Type.
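For reference, the loop can be sketched as below; the table and column names (dbo.Equipment, Attribute, Value, EquipmentID) are assumptions for illustration, not the actual schema:

```sql
DECLARE @Type sysname, @cols nvarchar(max), @sql nvarchar(max);

DECLARE type_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT DISTINCT [Type] FROM dbo.Equipment;

OPEN type_cursor;
FETCH NEXT FROM type_cursor INTO @Type;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- Build the column list for this Type, e.g. [Speed],[Weight],[Blade Size],[RPM]
    SELECT @cols = STUFF((SELECT DISTINCT ',' + QUOTENAME(Attribute)
                          FROM dbo.Equipment
                          WHERE [Type] = @Type
                          FOR XML PATH('')), 1, 1, '');

    SET @sql = N'SELECT * FROM
                 (SELECT EquipmentID, Attribute, Value
                  FROM dbo.Equipment WHERE [Type] = @t) src
                 PIVOT (MAX(Value) FOR Attribute IN (' + @cols + N')) p;';

    EXEC sp_executesql @sql, N'@t sysname', @t = @Type;   -- one result set per Type

    FETCH NEXT FROM type_cursor INTO @Type;
END

CLOSE type_cursor;
DEALLOCATE type_cursor;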

UPDATE:

I have added the following to my batch query:

INSERT INTO OPENROWSET('Microsoft.ACE.OLEDB.12.0',
    'Excel 12.0; Database=C:\temp\tesxt.xlsx;',
    'SELECT * FROM [Sheet1$]')

But I’m getting the following error:

Cannot create an instance of OLE DB provider "Microsoft.ACE.OLEDB.12.0" for linked server "(null)"

I ran these two commands prior to attempting the code above:

EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'Ad Hoc Distributed Queries', 1
RECONFIGURE
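That error usually means the ACE OLE DB provider is not registered for in-process use (or the 64-bit Access Database Engine is not installed on the server). A sketch of the remaining setup, assuming the 64-bit ACE redistributable is installed; the source table name below is a placeholder:

```sql
-- Allow the ACE provider to run inside the SQL Server process
-- (the same settings exposed under Server Objects > Linked Servers > Providers in SSMS).
EXEC master.dbo.sp_MSset_oledb_prop N'Microsoft.ACE.OLEDB.12.0', N'AllowInProcess', 1;
EXEC master.dbo.sp_MSset_oledb_prop N'Microsoft.ACE.OLEDB.12.0', N'DynamicParameters', 1;

-- The workbook must already exist and Sheet1 must have a header row
-- matching the columns being inserted.
INSERT INTO OPENROWSET('Microsoft.ACE.OLEDB.12.0',
    'Excel 12.0; Database=C:\temp\test.xlsx;',
    'SELECT * FROM [Sheet1$]')
SELECT FanID, Speed, Weight, [Blade Size], RPM  -- example columns from above
FROM dbo.FanPivotResults;                       -- hypothetical table holding one pivot result
```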

UPDATE:
I was hoping there was a quick-and-dirty export option that would eliminate the need to write an application.

I tried the bcp approach, but I was unable to get it to work. I changed the server name and database name to mine; no luck. The @sql variable holds my dynamic pivot query. I tried different bcp command parameters with no luck.


UPDATE:
Getting a little closer: I replaced @sql in the bcp command above with a simple SELECT query and the command executed, creating a new .csv file for each "Type" in the type list; there must be something in the complex dynamic pivot @sql string that bcp doesn’t like. Regardless, I know the underlying plumbing of having bcp export query results to a CSV file works.
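One likely culprit: bcp queryout runs the query on a separate connection, so an @sql batch that declares variables, references temp tables, or returns more than one result set will fail even though a plain SELECT works. A sketch of the export step via xp_cmdshell, assuming xp_cmdshell is enabled and the pivot result has been materialized into a permanent table first (server, database, and path are placeholders):

```sql
-- Materializing the pivot into a permanent table first avoids the
-- temp-table and multi-statement limitations of bcp's queryout mode.
DECLARE @cmd varchar(4000) =
    'bcp "SELECT * FROM MyDb.dbo.FanPivotResult" queryout '
  + '"C:\temp\Fan.csv" -c -t, -S MyServer -T';

EXEC master..xp_cmdshell @cmd;
```

Note that bcp writes one flat file per call; combining the Types into worksheets of a single .xlsx is beyond bcp itself.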


Get this bounty!!!

#StackBounty: #sql-server #linux #docker Copy content of SQL Server 2016 db

Bounty: 50

I would like to copy the whole content of a database (schema & data) between two SQL Server servers via a script, preferably embedded in a Linux Docker image.
The copy would be used in a test environment for testing purposes.

I have few constraints:

  1. I’m able to reach the selected DBs via the sqlcmd protocol.
  2. I don’t have access to files on the servers.
  3. I would like to execute the script from Linux.

Solutions I discarded:

  • Backup (.bak) files, as I don’t have file access.
  • Bacpac, as according to the docs I don’t have the necessary permissions.

What I think may work, but seems over-engineered:

  • Use sqlpackage to create a DAC file, as it is available on Linux.
  • Use the DAC to update the structure on the target database.
  • Use bcp from mssql-tools to copy the data.

The solution could fail if the structure update fails.
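The three steps can be sketched as a script; this is a dry run that only prints the commands (server names, credentials, database, and table names are all placeholders), since the exact sqlpackage and bcp flags depend on the tool versions in the image:

```shell
#!/bin/sh
# Dry-run sketch of schema copy (sqlpackage) + data copy (bcp). All names are placeholders.
SRC="source-host,1433"; DST="target-host,1433"
DB="MyDb"; USER="sa"; PASS="secret"

run() { echo "$@"; }   # remove the echo to actually execute the commands

# 1. Extract a schema-only DAC package from the source.
run sqlpackage /Action:Extract /SourceServerName:"$SRC" /SourceDatabaseName:"$DB" \
    /SourceUser:"$USER" /SourcePassword:"$PASS" /TargetFile:/tmp/"$DB".dacpac \
    /p:ExtractAllTableData=False

# 2. Publish the schema to the target.
run sqlpackage /Action:Publish /SourceFile:/tmp/"$DB".dacpac \
    /TargetServerName:"$DST" /TargetDatabaseName:"$DB" \
    /TargetUser:"$USER" /TargetPassword:"$PASS"

# 3. Copy the data table by table with bcp (repeat per table).
run bcp "$DB.dbo.MyTable" out /tmp/MyTable.bcp -n -S "$SRC" -U "$USER" -P "$PASS"
run bcp "$DB.dbo.MyTable" in  /tmp/MyTable.bcp -n -S "$DST" -U "$USER" -P "$PASS"
```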

Is there any other option which I overlooked?
Or a better idea?

EDIT:
I’ve implemented the solution described above (sqlpackage + bcp) and dockerized it HERE, all on Linux.

Still, looking for better approach.


Get this bounty!!!

#StackBounty: #sharepoint-online #sql-server #sharepoint-list #sql #sql-server-2012 Getting SharePoint List data into a SQL Server tabl…

Bounty: 50

I have been hitting dead ends trying to pull SharePoint List data into SQL Server.
The goal is to query the data so I can use it in a Tableau dashboard. The reason we have to do it this way is a two-fold problem.

1st. The Tableau connection works, but one of the columns returns all null values instead of the doubles that are in that column.

2nd. I was able to pull this data into MS Access and then into Tableau, but this will not work from an automation standpoint, so we need the data pulled into our SQL Server daily via a job.

I can manage all the automation once the connection is made but I cannot find a way to connect a list to my SQL Server.

I have tried everything online to fix this issue: an OData connection, a stored procedure that does not appear to exist, an SSIS connection, and adding it as a linked server. All ideas have failed.

Does anyone know a working way to pull data from SharePoint into SQL Server?


Get this bounty!!!

#StackBounty: #python #sql-server #sqlalchemy #pyodbc #presto Connecting to jTDS Microsoft server with SQLalchemy and Presto

Bounty: 50

I’m trying to connect to an old-school jTDS MS server for a variety of analysis tasks: first just using Python with SQLAlchemy, as well as using Tableau and Presto.

Focusing on SQLAlchemy first, at the moment I’m getting an error of:

Data source name not found and no default driver specified

with this code, based on this thread: Connecting to SQL Server 2012 using sqlalchemy and pyodbc

i.e.,

import urllib.parse
import sqlalchemy as sa

params = urllib.parse.quote_plus("DRIVER={FreeTDS};"
                                 "SERVER=x-y.x.com;"
                                 "DATABASE=;"
                                 "UID=user;"
                                 "PWD=password")

engine = sa.create_engine("mssql+pyodbc:///?odbc_connect={}".format(params))

Connecting works fine through DBeaver, using the jTDS SQL Server (MSSQL) driver (which is labelled as legacy).

Curious how to resolve this issue; I’ll keep researching, but would appreciate any help.

I imagine there is an old driver on the internet I need to integrate with SQLAlchemy to begin with, and then perhaps migrate this data to something newer.
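The "Data source name not found" error typically means unixODBC cannot find a driver whose name exactly matches the DRIVER={…} value in odbcinst.ini. A stdlib-only sketch of what the percent-encoded odbc_connect value should look like (server, credentials, and the TDS_Version attribute are placeholders/assumptions):

```python
import urllib.parse

# Placeholder connection attributes; TDS_Version is a FreeTDS-specific setting
# that is often required for older servers.
raw = ("DRIVER={FreeTDS};"
       "SERVER=x-y.x.com;"
       "PORT=1433;"
       "DATABASE=mydb;"
       "UID=user;"
       "PWD=password;"
       "TDS_Version=7.0;")

params = urllib.parse.quote_plus(raw)
url = "mssql+pyodbc:///?odbc_connect={}".format(params)
print(url)  # the fully percent-encoded DSN-less connection URL
```

If the printed URL looks right but the error persists, the driver name simply isn’t registered with unixODBC on that machine.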

Appreciate your time


Get this bounty!!!

#StackBounty: #sql-server #index #sql-server-2016 #nonclustered-index #columnstore Can adding a columnstore index to a table affect rea…

Bounty: 50

I’m doing some testing of columnstore indexing on a single table that has about 500 million rows.
The performance gains on aggregate queries have been awesome (a query that previously took about 2 minutes to run now aggregates the entire table in 0 seconds).

But I also noticed another test query, which leverages seeking on an existing rowstore index on the same table, is now running 4x slower than it did before creating the columnstore index. I can repeatedly demonstrate this: when I drop the columnstore index, the rowstore query runs in 5 seconds, and when I add the columnstore index back, the rowstore query runs in 20 seconds.

I’m keeping an eye on the actual execution plan for the rowstore index query, and it’s almost exactly the same in both cases, regardless of whether the columnstore index exists. (It uses the rowstore index in both cases.)

The rowstore test query is:

SELECT *
INTO #TEMP
FROM Table1 WITH (FORCESEEK)
WHERE IntField1 = 571
    AND DateField1 >= '6/01/2020'

The rowstore index used in this query is: CREATE NONCLUSTERED INDEX IX_Table1_1 ON Table1 (IntField1, DateField1) INCLUDE (IntField2)

The columnstore test query is:

SELECT COUNT(DISTINCT IntField2) AS IntField2_UniqueCount, COUNT(1) AS RowCount
FROM Table1
WHERE IntField1 = 571 -- Some other test columnstore queries also don't use any WHERE predicates on this table
    AND DateField1 >= '1/1/2019' 

The columnstore index is: CREATE NONCLUSTERED COLUMNSTORE INDEX IX_Table1_2 ON Table1 (IntField2, IntField1, DateField1)

Here is the execution plan for the rowstore index query before I create the columnstore index:
Execution Plan - Rowstore Index - Pre-Columnstore Index Creation

Here is the execution plan for the rowstore index query after I create the columnstore index:
Execution Plan - Rowstore Index - Post-Columnstore Index Creation

The only differences I notice between the two plans are that the Sort operator’s warning goes away after creating the columnstore index, and that the Key Lookup and Table Insert (#TEMP) operators take significantly longer.

Here is the Sort operation’s info with the warning (before creating the columnstore index):
Sort Operation - Warning

Here’s the Sort operation’s info without the warning (after creating the columnstore index):
Sort Operation

I would’ve thought a read query that specifically leverages the same rowstore index and execution plan in both cases should have roughly the same performance on every run, regardless of what other indexes exist on that table. What gives here?
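For anyone reproducing this, the A/B timings can be captured with a small harness; a sketch, using the index definitions above and an unambiguous date literal:

```sql
SET STATISTICS TIME, IO ON;

-- A: with the columnstore index present
SELECT * INTO #TEMP_A FROM Table1 WITH (FORCESEEK)
WHERE IntField1 = 571 AND DateField1 >= '20200601';

DROP INDEX IX_Table1_2 ON Table1;

-- B: rowstore index only
SELECT * INTO #TEMP_B FROM Table1 WITH (FORCESEEK)
WHERE IntField1 = 571 AND DateField1 >= '20200601';

-- Restore the columnstore index afterwards
CREATE NONCLUSTERED COLUMNSTORE INDEX IX_Table1_2
    ON Table1 (IntField2, IntField1, DateField1);
```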

Edit: Here are the TIME and IO stats before creating the columnstore index:
Stats - Before Columnstore Index Creation

Here are the stats after creating the columnstore index:
Stats - After Columnstore Index Creation


Get this bounty!!!

#StackBounty: #sql-server #query-performance #sql-server-2014 #optimization #cardinality-estimates Changing database compatibility from…

Bounty: 50

Just seeking expert/practical advice from a DBA point of view: one of our application DBs running on SQL Server 2014 still had the old database compatibility level after migration, i.e. 100 (SQL 2008).

From a dev point of view, all the testing has been done; they don’t see much difference and want to move to prod based on their testing.

In our testing, for certain processes where we see slowness (like in SPs), we found the part of the statement that was slow and added a QUERYTRACEON hint, something like below, while keeping compatibility at 120, which helps keep performance stable:

SELECT  [AddressID],
    [AddressLine1],
    [AddressLine2]
FROM Person.[Address]
WHERE [StateProvinceID] = 9 AND
    [City] = 'Burbank'
OPTION (QUERYTRACEON 9481);
GO

UPDATE - editing the question based on more findings:

Actually, we found things getting worse for a table that calls a scalar function within a computed column.

Below is how that column looks:

CATCH_WAY AS ([dbo].[fn_functionf1]([Col1])) PERSISTED NOT NULL

and the part of the query where it goes weird looks somewhat like below:

DELETE t2
OUTPUT deleted.col1,
       deleted.col2,
       deleted.col3
INTO #temp1
FROM #temp2 t2
INNER JOIN dbo.table1 tb1 ON tb1.CATCH_WAY = [dbo].[fn_functionf1](t2.[Col1])
    AND t2.[col2] = tb1.[col2]
    AND t2.[col3] = tb1.[col3]
    AND ISNULL(t2.[col4], '') = ISNULL(tb1.[col4], '')

I know the function is being called and is slow, but the problem is this: at the current compat level (100) it runs OK-ish slow, but when changed to 120 it gets 100x slower.
What is happening?


Get this bounty!!!

#StackBounty: #sql-server #sql-server-2014 #mirroring #compatibility-level Compatibility mode change does not reflect on secondaries

Bounty: 50

I was under the belief that changing the compatibility mode on the principal database in database mirroring would make the change on the mirror database as well.

But that proved to be wrong. I queried sys.databases on the principal, where I had changed the compatibility mode to 120; however, on its mirror database the compatibility level was still 100. Why?

Am I using the incorrect catalog view here (sys.databases)?

Also, if it does not change, what does that mean? Do I need to do a failover/failback for it to reflect?

What if I also had a read-only log shipping secondary database where this change needs to reflect? Would I need to rebuild log shipping then?
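sys.databases is the right catalog view for this; the thing to watch is that it reflects the local instance’s own catalog, so it has to be run on the principal and the mirror separately, e.g.:

```sql
-- Run on each instance; compatibility_level comes from that instance's catalog.
SELECT name, compatibility_level, state_desc
FROM sys.databases
WHERE name = N'MyDb';   -- placeholder database name
```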

Thanks


Get this bounty!!!

#StackBounty: #sql-server #migration #active-directory SQL Server 2000/2005 compatibility with Active Directory 2016

Bounty: 50

We are currently using Active Directory 2008 R2 and will be upgrading to AD 2016. I’m trying to determine if there are any known compatibility issues when running older versions of SQL Server (2000 and 2005) after upgrading to AD 2016. Has anyone been through this process? Most of our DB servers run SQL 2008 through 2016, but a few still run 2000/2005. Thanks


Get this bounty!!!
