#StackBounty: #sql-server #reporting-services #ssrs-2012 SSRS user access to specific folders

Bounty: 100

I’d appreciate it if anyone could provide a clear description of SSRS user access configuration.

I’ve installed the latest SSRS and its database on a single server and set up the web portal, and everything works fine, except that I cannot grant a specific user group access to a specific folder. So far, all users have access to everything.

I’ve been struggling with this for several weeks and still haven’t found a tutorial that covers it.
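
For reference, here is a sketch (assuming direct access to the ReportServer catalog database and its default schema; this is only for inspection, not a supported API) to see which users or groups hold which role on each folder:

SELECT c.Path,
       u.UserName,
       r.RoleName
FROM dbo.Catalog AS c
JOIN dbo.PolicyUserRole AS pur ON pur.PolicyID = c.PolicyID
JOIN dbo.Users AS u ON u.UserID = pur.UserID
JOIN dbo.Roles AS r ON r.RoleID = pur.RoleID
WHERE c.Type = 1  -- folders only
ORDER BY c.Path, u.UserName;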


Get this bounty!!!

#StackBounty: #asp.net #sql-server #asp.net-mvc #elmah #elmah.mvc ELMAH fails to login to DB on production but not test server

Bounty: 50

First off, here are a few of the potential duplicates that didn’t help:
Can't access /elmah on production server with Elmah MVC?
ELMAH doesn't insert error logs to SQL DB on production
ELMAH works on local machine, but not on production
Elmah, convert to .Net4 vs2010, run on server 2008, does not work
Cannot open database "test" requested by the login. The login failed. Login failed for user 'xyz\ASPNET'
The error "Login failed for user 'NT AUTHORITY\IUSR'" in ASP.NET and SQL Server 2008


My ASP.NET MVC application uses one SQL Server database and ELMAH uses another. On the test server, the IIS app pool account is used to connect to SQL Server, as intended, and everything works fine. However, after moving to production with the same settings, Entity Framework continues to use the IIS APPPOOL\Product Windows account to connect to SQL Server, but ELMAH starts attempting to use the DOMAIN\MACHINENAME$ account (which I didn’t create or set up; I don’t know enough about Windows domains to know where it comes from).

Attempting to go to the ELMAH page that lists errors (i.e. http://localhost/elmah) returns HTTP 500 and the event log shows this classic error message:

Cannot open database “Errors” requested by the login. The login failed. Login failed for user ‘DOMAIN\MACHINENAME$’

Out of desperation, I’ve tried just giving DOMAIN\MACHINENAME$ database permissions (using SSMS to map the login to the Errors database and selecting db_datawriter and db_datareader), including the stored procedures suggested here, but even that doesn’t work. That same process is how I set it up on the test server.
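
For completeness, this is roughly the T-SQL equivalent of what I did in SSMS (the login name is taken from the error message, and the ELMAH_* procedure names assume the standard ELMAH SQL Server script):

USE [master];
CREATE LOGIN [DOMAIN\MACHINENAME$] FROM WINDOWS;
GO
USE [Errors];
CREATE USER [DOMAIN\MACHINENAME$] FOR LOGIN [DOMAIN\MACHINENAME$];
ALTER ROLE db_datareader ADD MEMBER [DOMAIN\MACHINENAME$];
ALTER ROLE db_datawriter ADD MEMBER [DOMAIN\MACHINENAME$];
-- ELMAH also needs EXECUTE on its logging/reading procedures
GRANT EXECUTE ON OBJECT::dbo.ELMAH_LogError TO [DOMAIN\MACHINENAME$];
GRANT EXECUTE ON OBJECT::dbo.ELMAH_GetErrorXml TO [DOMAIN\MACHINENAME$];
GRANT EXECUTE ON OBJECT::dbo.ELMAH_GetErrorsXml TO [DOMAIN\MACHINENAME$];
GO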

Production is Windows Server 2012 R2 running IIS 8.5. Test is Windows 10 running IIS 10. Both are running SQL Server 2016 Express v13, for now.

Here’s the Web.config, with everything that’s even slightly related to this included:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <configSections>
    <section name="entityFramework" type="System.Data.Entity.Internal.ConfigFile.EntityFrameworkSection, EntityFramework, Version=6.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" requirePermission="false" />
    <sectionGroup name="elmah">
      <section name="security" requirePermission="false" type="Elmah.SecuritySectionHandler, Elmah" />
      <section name="errorLog" requirePermission="false" type="Elmah.ErrorLogSectionHandler, Elmah" />
      <section name="errorMail" requirePermission="false" type="Elmah.ErrorMailSectionHandler, Elmah" />
      <section name="errorFilter" requirePermission="false" type="Elmah.ErrorFilterSectionHandler, Elmah" />
    </sectionGroup>
  </configSections>
  <connectionStrings>
    <clear />
    <!-- Entity Framework connection, works fine on both servers -->
    <add name="DbContext" connectionString="Data Source=.SQLEXPRESS;AttachDbFilename=|DataDirectory|Data.mdf;Initial Catalog=Data;Integrated Security=True" providerName="System.Data.SqlClient" />
    <!-- ELMAH connection, only works on test server -->
    <add name="ElmahLog" connectionString="Data Source=.SQLEXPRESS;AttachDbFilename=|DataDirectory|Errors.mdf;Initial Catalog=Errors;Integrated Security=True" providerName="System.Data.SqlClient" />
  </connectionStrings>
  <appSettings>
    <add key="elmah.mvc.disableHandler" value="false" />
    <add key="elmah.mvc.disableHandleErrorFilter" value="false" />
    <add key="elmah.mvc.requiresAuthentication" value="true" />
    <add key="elmah.mvc.IgnoreDefaultRoute" value="true" />
    <add key="elmah.mvc.allowedRoles" value="Developer" />
    <add key="elmah.mvc.route" value="dev/elmah" />
    <add key="elmah.mvc.UserAuthCaseSensitive" value="true" />
  </appSettings>
  <system.web>
    <globalization culture="en-US" />
    <authentication mode="Windows" />
    <compilation debug="false" targetFramework="4.5.2">
      <assemblies>
        <add assembly="System.Core, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" />
      </assemblies>
    </compilation>
    <httpRuntime targetFramework="4.5.2" />
    <httpModules>
      <add name="ErrorLog" type="Elmah.ErrorLogModule, Elmah" />
      <add name="ErrorMail" type="Elmah.ErrorMailModule, Elmah" />
      <add name="ErrorFilter" type="Elmah.ErrorFilterModule, Elmah" />
    </httpModules>
    <customErrors mode="Off" />
  </system.web>
  <system.webServer>
    <modules>
      <add name="ErrorLog" type="Elmah.ErrorLogModule, Elmah" preCondition="managedHandler" />
      <add name="ErrorMail" type="Elmah.ErrorMailModule, Elmah" preCondition="managedHandler" />
      <add name="ErrorFilter" type="Elmah.ErrorFilterModule, Elmah" preCondition="managedHandler" />
    </modules>
    <validation validateIntegratedModeConfiguration="false" />
  </system.webServer>
  <entityFramework>
    <defaultConnectionFactory type="System.Data.Entity.Infrastructure.SqlConnectionFactory, EntityFramework" />
    <providers>
      <provider invariantName="System.Data.SqlClient" type="System.Data.Entity.SqlServer.SqlProviderServices, EntityFramework.SqlServer" />
    </providers>
  </entityFramework>
  <elmah>
    <security allowRemoteAccess="true" />
    <errorLog type="Elmah.SqlErrorLog, Elmah" connectionStringName="ElmahLog" />
  </elmah>
  <system.codedom>
    <compilers>
      <compiler language="c#;cs;csharp" extension=".cs" type="Microsoft.CodeDom.Providers.DotNetCompilerPlatform.CSharpCodeProvider, Microsoft.CodeDom.Providers.DotNetCompilerPlatform, Version=1.0.8.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" warningLevel="4" compilerOptions="/langversion:6 /nowarn:1659;1699;1701" />
      <compiler language="vb;vbs;visualbasic;vbscript" extension=".vb" type="Microsoft.CodeDom.Providers.DotNetCompilerPlatform.VBCodeProvider, Microsoft.CodeDom.Providers.DotNetCompilerPlatform, Version=1.0.8.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" warningLevel="4" compilerOptions="/langversion:14 /nowarn:41008 /define:_MYTYPE=&quot;Web&quot; /optionInfer+" />
    </compilers>
  </system.codedom>
</configuration>  

Why is ELMAH using a different account and why can’t I give that account permissions?


Get this bounty!!!

#StackBounty: #sql-server #sql-server-2016 #performance-tuning #data-warehouse #slowly-changing-dimension Create Slowly Changing Dimens…

Bounty: 50

I am receiving transaction snapshot data from a file; it contains a history of repeated data. I am currently trying to build a slowly changing dimension from a table whose business key is [ProductId]. There are many attributes (ProductTitle, ProductCategory, ...); this is a sample table, and the real one has around 10 more attributes. How do I write a query to create a Product slowly changing dimension table?

I’m searching for a performance-optimized approach; with 10 columns, I’m not sure a GROUP BY over all 10 is optimal.

With SQL Server 2016, is there a function to obtain this data? Should I use LEAD/LAG? FIRST_VALUE/LAST_VALUE? Some newer analytic syntax? An attempted query is below.

Note: the data comes from a 1970s-era legacy file system containing historical data.

Data:

create table dbo.Product
(
    ProductId int,
    ProductTitle varchar(55),
    ProductCategory varchar(255),
    Loaddate datetime
)

insert into dbo.Product
values 
 (1,'Table','ABCD','3/4/2018')
,(1,'Table','ABCD','3/5/2018')
,(1,'Table','ABCD','3/5/2018')
,(1,'Table','ABCD','3/6/2018')
,(1,'Table','XYZ','3/7/2018')
,(1,'Table','XYZ','3/8/2018')
,(1,'Table','XYZ','3/8/2018')
,(1,'Table','XYZ','3/9/2018')
,(1,'Table-Dinner', 'GHI','3/10/2018')
,(1,'Table-Dinner', 'GHI','3/11/2018')
-- ...more data with ProductId = 2, 3, 4, etc.

Current Repeated Data in File:

(screenshot of the repeated rows)

Expected Output:

(screenshot of the expected output)

Attempted Query

(this seems inefficient, especially when there are 10 attribute columns)

select
    product.Productid
    ,product.ProductTitle
    ,product.ProductCategory
    ,min(product.LoadDate) as BeginDate
    ,case when max(product.LoadDate)  = (select max(subproduct.LoadDate) from dbo.Product subproduct where subproduct.productid = product.productid) then '12/31/9999' else max(product.loadDate) end as EndDate
from dbo.Product product
group by Productid, ProductTitle, ProductCategory
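
For comparison, here is a LEAD/LAG-based sketch of what I had in mind (only the two sample attributes are compared; with the real table every attribute would need a comparison, including NULL-safe handling, and the EndDate convention here takes the next version’s start date rather than the last load date of the old version):

with Changes as
(
    select
        ProductId, ProductTitle, ProductCategory, Loaddate,
        case when lag(ProductTitle)    over (partition by ProductId order by Loaddate) = ProductTitle
              and lag(ProductCategory) over (partition by ProductId order by Loaddate) = ProductCategory
             then 0 else 1 end as IsChange
    from dbo.Product
),
Versions as
(
    select ProductId, ProductTitle, ProductCategory, Loaddate as BeginDate
    from Changes
    where IsChange = 1   -- first row per product plus every row where an attribute changed
)
select
    ProductId, ProductTitle, ProductCategory, BeginDate,
    isnull(lead(BeginDate) over (partition by ProductId order by BeginDate), '12/31/9999') as EndDate
from Versions
order by ProductId, BeginDate;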


Get this bounty!!!

#StackBounty: #sql-server #sql-server-2016 #fragmentation #entity-framework SQL Server one-to-many and index fragmentation

Bounty: 50

I’m currently writing code to update an entity’s child collection (a one-to-many relationship), and while thinking about how to determine which entities have been added/removed/modified, I realized that it’s easier to just recreate the whole list. But is that also better for SQL Server performance?

Let’s say I have a table Student with a one-to-many relationship to Course, and there are two courses, Math and Physics, with primary keys 1 and 2 respectively, and obviously a foreign key to Student, which is 1.

If one day I decide to update the first course, remove the other, and add a new one, I end up with the following entities:

  • Math (updated), primary key is: 1
  • Physics (deleted), primary key was: 2
  • History (added), primary key is: 5000

Now the two courses are no longer next to each other, and this presumably causes big performance problems in the long run since my keys are now fragmented. Is this correct?
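
Rather than guessing, the fragmentation of the child table’s clustered index can at least be measured; a minimal sketch, assuming the example table is dbo.Course:

SELECT i.name AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID(N'dbo.Course'), NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON i.object_id = ips.object_id
   AND i.index_id = ips.index_id;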

A related question is whether I should have a primary key on the Course table at all. I usually add one by default to all of my tables (except association tables), but in this case I never query courses individually, only in the context of a student. Does it make sense to keep the primary key even if I know I’ll never reference it from anywhere? Would the lack of a primary key hurt performance?

EDIT: I’ve decided to keep the primary key, in case I ever need to query this child table without the parent.

My primary keys are integers with a clustered index (the default on SQL Server). I’m using SQL Server 2016, and Entity Framework 6 as my ORM.

UPDATE:

Here’s my CREATE TABLE from SSMS. I’m using EF Code-First so I didn’t write any of this by hand.

CREATE TABLE [dbo].[Courses](
    [Id] [int] IDENTITY(1,1) NOT NULL,
    [Description] [nvarchar](max) NOT NULL,
    [StudentId] [uniqueidentifier] NOT NULL,
    [Fk1Id] [int] NOT NULL,
    [Fk2Id] [int] NOT NULL,
    [Fk3Id] [int] NULL,
 CONSTRAINT [PK_dbo.Courses] PRIMARY KEY CLUSTERED 
 (
     [Id] ASC
 )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
 ) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO

ALTER TABLE [dbo].[Courses]  WITH CHECK ADD  CONSTRAINT [FK_dbo.Courses_dbo.Students_StudentId] FOREIGN KEY([StudentId])
REFERENCES [dbo].[Students] ([Id])
GO

ALTER TABLE [dbo].[Courses] CHECK CONSTRAINT [FK_dbo.Courses_dbo.Students_StudentId]
GO
-- Omitted other ALTER TABLE statements for Fk1Id, Fk2Id, and Fk3Id

StudentId is the foreign key that I will always use to fetch these rows; they are not relevant in any other context (at least for now).

I have renamed the table and column names for privacy reasons, and removed 3 foreign key columns for simplicity! I hope they’re not too relevant.


Get this bounty!!!

#StackBounty: #sql-server #azure-sql-database #active-directory #azure Service Principal as SQL Active Directory Admin, does it use Gra…

Bounty: 50

I’m looking to use a service principal as the server admin, so it can be used in a release pipeline to create further Active Directory users.

I’m able to make the service principal the server admin and connect to the database using an access token, so the service principal authentication itself works fine, which is great.

However, when creating further users, they never seem to be found, and I get the error:

“Principal ‘name here’ could not be found at this time. Please try again later.”

However, when logged in as myself, I am able to create other users fine.
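
For context, this is the kind of statement I mean by creating further users (the principal name is a placeholder):

CREATE USER [my-pipeline-principal] FROM EXTERNAL PROVIDER;
ALTER ROLE db_datareader ADD MEMBER [my-pipeline-principal];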

Which leads me to wonder: is SQL Server using some sort of Active Directory impersonation to validate that the subsequent users / service principals exist?

If I were to grant my service principal access to the Graph API, would that give it enough permissions? If so, what are the minimum rights necessary?

I also see that the Azure SQL server has created a managed service identity in the background, so I wonder if that’s also in play?

Also, I saw that managed instances explicitly require a service principal with AD rights to hook into AD, which I presume is similar to an Azure SQL server’s MSI? But the Azure SQL Server documentation doesn’t mention this, as if an AD admin were enough.

I’d love to know how the underlying mechanism for the Active Directory hookup works, as it could help me investigate my challenges further.


Get this bounty!!!

#StackBounty: #sql-server #sql-server-2016 #availability-groups #transactional-replication Configure Always On for publisher

Bounty: 50

We have pull transactional replication. The Publisher runs SQL Server 2016.
The Distributor runs on a separate server, also SQL Server 2016.
Is it possible to set up Always On across the publisher and another, new server, so that the publisher becomes the primary replica in the availability group?
Most importantly, the subscribers must keep working.
We are OK with disabling replication for several hours, but we would like to keep it without recreating it.
(I could only find articles on configuring replication on top of Always On, which is the opposite of what I’m asking.)


Get this bounty!!!

#StackBounty: #sql-server #sql-server-2012 #availability-groups Availability Group reporting disconnected replica

Bounty: 100

I’m trying to configure availability groups in a VM environment so I can run some tests.

I think I’ve got the group created correctly; I can see the Always On group on both servers. However, when I look at the dashboard for the group, it shows the following error:

Availability replica disconnected

This secondary replica is not connected to the primary replica. The connected state is DISCONNECTED.

I’ve checked the endpoints on both servers and they look correct. There are no firewalls running and both servers can see each other. What’s the best way to debug this sort of error?

Below is the T-SQL I used to set all this up.

Primary Server

CREATE ENDPOINT dbm_endpoint
    STATE=STARTED 
    AS TCP (LISTENER_PORT=7022) 
    FOR DATABASE_MIRRORING (ROLE=ALL)
GO

Secondary Server

CREATE ENDPOINT dbm_endpoint
    STATE=STARTED 
    AS TCP (LISTENER_PORT=5022) 
    FOR DATABASE_MIRRORING (ROLE=ALL)
GO

Primary Server

CREATE AVAILABILITY GROUP AG1
    FOR
        DATABASE TestDb
    REPLICA ON
        'SQL1' WITH
            (
                ENDPOINT_URL = 'TCP://sql1.sql.sandbox.net:7022',
                PRIMARY_ROLE ( ALLOW_CONNECTIONS = READ_WRITE),
                SECONDARY_ROLE (ALLOW_CONNECTIONS=READ_ONLY),
                AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
                FAILOVER_MODE = MANUAL
            ),
        'SQL2' WITH
            (
                ENDPOINT_URL = 'TCP://sql2.sql.sandbox.net:5022',
                PRIMARY_ROLE ( ALLOW_CONNECTIONS = READ_WRITE),
                SECONDARY_ROLE (ALLOW_CONNECTIONS=READ_ONLY),
                AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
                FAILOVER_MODE = MANUAL
            );

Secondary Server

ALTER AVAILABILITY GROUP AG1 JOIN;

Obviously, I also restored the primary database to the secondary server.

One thought: I didn’t install SQL Server Agent on either server; I’m guessing this is not needed for Always On availability groups?
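
A sketch of the first things to check (the account name is a placeholder): whether each endpoint is started and listening on the expected port, whether the other instance’s service account has CONNECT permission on the endpoint, and what the replica-state DMVs report:

-- 1. Endpoint state and port (run on both servers)
SELECT dme.name, dme.state_desc, te.port
FROM sys.database_mirroring_endpoints AS dme
JOIN sys.tcp_endpoints AS te ON te.endpoint_id = dme.endpoint_id;

-- 2. If the instances run under different service accounts, the other server's
--    account needs CONNECT on the endpoint (account name is a placeholder)
GRANT CONNECT ON ENDPOINT::dbm_endpoint TO [SANDBOX\OtherSqlServiceAccount];

-- 3. Connection state and last connection error as seen by each replica
SELECT r.replica_server_name, rs.connected_state_desc, rs.last_connect_error_description
FROM sys.dm_hadr_availability_replica_states AS rs
JOIN sys.availability_replicas AS r ON r.replica_id = rs.replica_id;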


Get this bounty!!!

#StackBounty: #sql-server #sql-server-2012 #sql-server-2008 #backup #compatibility-level Backing up a SQL 2008 DB, then restoring, give…

Bounty: 50

We have four different databases on a SQL Server 2008 (not SP2; don’t ask me why) database server that have an odd problem.

When listed from sys.databases, these databases report their compatibility as SQL Server 2008/R2 (compatibility level 100), as shown here for three of the DBs:

MyDB_Name               100 SQL Server 2008/R2
MyDB_Name_MirrorTables  100 SQL Server 2008/R2
MyDB_Name_Reporting     100 SQL Server 2008/R2

As output from this query:

select 
    name, compatibility_level , version_name = 
    case compatibility_level
        when 90  then 'SQL Server 2005'
        when 100 then 'SQL Server 2008/R2'
        when 110 then 'SQL Server 2012'
        else 'unknown - ' + convert(varchar(10), compatibility_level)
    end
from sys.databases

Backups of other user DBs on the server restore correctly as compatibility level 100. However, backups of these four DBs restore as compatibility level 90 (SQL Server 2005). Only these four DBs appear to have this issue.

To test this, I took a manual backup of one of these DBs with only the options INIT, SKIP specified. I then restored this backup to a SQL Server 2012 server. When restored, the DB went through the upgrade process from Version 655 to 706. Nothing unusual happened.

However, when looking at the compatibility level on SQL Server 2012, using the same code as above, the information showed up as:

bwh_MyDB_Name_MirrorTables  90  SQL Server 2005

Additionally, when restored to the original SQL Server 2008 DB server under a different name, the database still returns SQL Server 2005 (compatibility level 90) for the DB version, even though it was backed up on the same server as compatibility level 100.

Finally, although the DB reports as SQL Server 2008, a query returning a date column from a DB2 linked server returns a datetime (with 00:00:00.000 for the time). Running the same query on SQL Server 2012 returns the value as a date data type, which is also what a SQL Server 2008/R2 database should return.

In my last tests, I added a test table to the DB containing a date and a time field, and inserted a row of data. Then I backed up and restored the DB again. The table worked correctly, though the compatibility level was 90. I also created a table with date, time, and datetime2 fields with no problems.

I’m at a loss as to where to look next. The date/datetime problem was the original trigger for looking into this, but it has become a much larger puzzle. Obviously, I could simply CAST the incoming data as a date value (which does work, oddly enough), but that doesn’t explain the RESTORE returning a DB at compatibility level 90, or the DB2 query returning a datetime instead of a date from the linked server. Any suggestions would be gladly accepted.
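
As a stop-gap (a sketch with a placeholder database name), the compatibility level can at least be checked and raised explicitly after each restore, although that doesn’t explain why the RESTORE lowers it:

SELECT name, compatibility_level
FROM sys.databases
WHERE name = N'MyDB_Name_Restored';

ALTER DATABASE [MyDB_Name_Restored] SET COMPATIBILITY_LEVEL = 100;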


Get this bounty!!!

#StackBounty: #sql-server #sql-server-2016 How does the Microsoft tablediff utility work?

Bounty: 50

Our team is curious about the architecture of the tablediff utility. We want to find the differences between table rows.

Microsoft mentions tablediff here, but does not state how it works. Does tablediff take a checksum or hash of every row and compare the tables that way? What is the internal algorithm?

https://docs.microsoft.com/en-us/sql/tools/tablediff-utility?view=sql-server-2017

create table dbo.CustomerTransaction
(
    CustomerTransactionId int primary key,
    CustomerName varchar(50),
    ProductName varchar(50),
    QuantityBought int
)

An example of using tablediff:

So the row in Table 1: (1,'Bob','Table',8) is the same as the row in Table 2: (1,'Bob','Table',8).

These are different: (1,'Bob','Table',8) vs (1,'Bob','Chair',8); they match on the primary key but differ in a column value.

I know it requires the source table to have a primary key, identity, or rowguid column to compare on, and that it was originally based on replication technology.
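
To make the question concrete, this is roughly the comparison we imagine tablediff performing (only a sketch of the concept, not its actual internals; the source/destination database names are placeholders):

SELECT COALESCE(s.CustomerTransactionId, d.CustomerTransactionId) AS CustomerTransactionId,
       CASE
           WHEN s.CustomerTransactionId IS NULL THEN 'Destination only'
           WHEN d.CustomerTransactionId IS NULL THEN 'Source only'
           WHEN CHECKSUM(s.CustomerName, s.ProductName, s.QuantityBought)
             <> CHECKSUM(d.CustomerName, d.ProductName, d.QuantityBought) THEN 'Mismatch'
           ELSE 'Same'
       END AS diff_status
FROM SourceDb.dbo.CustomerTransaction AS s
FULL OUTER JOIN DestDb.dbo.CustomerTransaction AS d
    ON d.CustomerTransactionId = s.CustomerTransactionId;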


Get this bounty!!!

#StackBounty: #sql-server #sql-server-2016 #ssdt #visual-studio #query-store Visual Studio SSDT Ignore Query Store Options in Publish P…

Bounty: 50

Is there any way, in a Visual Studio SSDT publish profile, to completely ignore the Query Store options?

It keeps asking us to rerun the publish code below. I go to the database’s advanced settings and turn Query Store off, but even after it is turned off, it keeps showing up in the publish script. Is there any way to just ignore Query Store in the publish profile, rather than trying to turn it off?

I am looking through the advanced options in the publish profile.

(screenshot of the publish profile’s advanced options)

BEGIN
    ALTER DATABASE [$(DatabaseName)]
        SET QUERY_STORE (QUERY_CAPTURE_MODE = NONE, CLEANUP_POLICY = (STALE_QUERY_THRESHOLD_DAYS = 367)) 
        WITH ROLLBACK IMMEDIATE;
END


Get this bounty!!!