#StackBounty: #sql-server #replication #sql-server-2016 #transactional-replication Do long-running and locking queries on subscriber da…

Bounty: 50

One of our servers has a subscriber database. I’ve not used replication before and cannot find information on how the mechanism works.
Does the publisher have to wait for replication to complete before its transactions are committed, or does it make use of a queue or the transaction log before applying changes to the subscriber?
The subscriber is used as the back-end for a custom report builder (amongst other uses) which cannot make good use of indexes, so tables are frequently locked for several minutes. I am concerned about any effects this might have on our publisher, which is on our live production server.
Both publisher and subscriber are in Full Recovery.
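For what it's worth, my understanding is that one way to see whether subscriber-side blocking backs up delivery (rather than delaying the publisher's commits) is to check the undistributed-command backlog at the distributor. A sketch, where every name is a placeholder:

-- Run at the distributor; all names below are placeholders
USE distribution;
EXEC sp_replmonitorsubscriptionpendingcmds
    @publisher         = N'MyPublisher',
    @publisher_db      = N'MyPublishedDb',
    @publication       = N'MyPublication',
    @subscriber        = N'MySubscriber',
    @subscriber_db     = N'MySubscriberDb',
    @subscription_type = 0;  -- 0 = push, 1 = pull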


Get this bounty!!!

#StackBounty: #sql-server #index #sql-server-2017 #storage Attempts to reclaim unused space causes the used space to increase significa…

Bounty: 50

I have a table in a production database that has a size of 525 GB, of which 383 GB is unused:

Unused Space

I’d like to reclaim some of this space, but, before messing with the production DB, I’m testing some strategies on an identical table in a test DB with less data. This table has a similar problem:

Unused Space

Some information about the table:

  • The fill factor is set to 0
  • There are about 30 columns
  • One of the columns is a LOB of type image, and it’s storing files that range in size from a few KB to several hundred MB

The Server is running SQL Server 2017 (RTM-GDR) (KB4505224) – 14.0.2027.2 (X64).

Some things I’ve tried:

  • Rebuilding the indexes: ALTER INDEX ALL ON dbo.MyTable REBUILD. This had a negligible impact.
  • Reorganizing the indexes: ALTER INDEX ALL ON dbo.MyTable REORGANIZE WITH(LOB_COMPACTION = ON). This had a negligible impact.
  • Copying the LOB column to another table, dropping the column, re-creating it, and copying the data back (as outlined in this post: Freeing Unused Space SQL Server Table). This decreased the unused space, but it seemed to just convert it into used space:

    Unused Space

  • Using the bcp utility to export the table, truncate it, and reload it (as outlined in this post: How to free the unused space for a table). This also reduced the unused space and increased the used space, to a similar extent as in the image above.

  • Even though it’s not recommended, I tried the DBCC SHRINKFILE and DBCC SHRINKDATABASE commands, but they didn’t have any impact on the unused space.
  • Running DBCC CLEANTABLE('myDB', 'dbo.myTable') didn’t make a difference.
  • I’ve tried all of the above both while maintaining the image and text datatypes and after changing the datatypes to varbinary(max) and varchar(max).
  • I tried importing the data into a new table in a fresh database, and this also only converted the unused space into used space. I outlined the details of this attempt in this post.
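Before making further attempts, it may help to see which allocation units actually hold the unused pages. Here's a sketch against the standard catalog views; with an image column, I'd expect most of the space to show up under LOB_DATA:

-- Break the table's space down by allocation unit type
SELECT  au.type_desc,
        SUM(au.total_pages) * 8 / 1024 AS total_mb,
        SUM(au.used_pages)  * 8 / 1024 AS used_mb,
        (SUM(au.total_pages) - SUM(au.used_pages)) * 8 / 1024 AS unused_mb
FROM sys.partitions AS p
JOIN sys.allocation_units AS au
    ON (au.type IN (1, 3) AND au.container_id = p.hobt_id)      -- in-row, row-overflow
    OR (au.type = 2       AND au.container_id = p.partition_id) -- LOB data
WHERE p.object_id = OBJECT_ID(N'dbo.MyTable')
GROUP BY au.type_desc;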

I don’t want to make these attempts on the production DB if these are the results I can expect, so:

  1. Why is the unused space just being converted to used space after some of these attempts? I feel like I don’t have a good understanding of what’s happening under the hood.
  2. Is there anything else I can do to decrease the unused space without increasing the used space?

EDIT: Here’s the Disk Usage report and script for the table:

Disk Usage

SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[MyTable](
    [Column1]  [int] NOT NULL,
    [Column2]  [int] NOT NULL,
    [Column3]  [int] NOT NULL,
    [Column4]  [bit] NOT NULL,
    [Column5]  [tinyint] NOT NULL,
    [Column6]  [datetime] NULL,
    [Column7]  [int] NOT NULL,
    [Column8]  [varchar](100) NULL,
    [Column9]  [varchar](256) NULL,
    [Column10] [int] NULL,
    [Column11] [image] NULL,
    [Column12] [text] NULL,
    [Column13] [varchar](100) NULL,
    [Column14] [varchar](6) NULL,
    [Column15] [int] NOT NULL,
    [Column16] [bit] NOT NULL,
    [Column17] [datetime] NULL,
    [Column18] [varchar](50) NULL,
    [Column19] [varchar](50) NULL,
    [Column20] [varchar](60) NULL,
    [Column21] [varchar](20) NULL,
    [Column22] [varchar](120) NULL,
    [Column23] [varchar](4) NULL,
    [Column24] [varchar](75) NULL,
    [Column25] [char](1) NULL,
    [Column26] [varchar](50) NULL,
    [Column27] [varchar](128) NULL,
    [Column28] [varchar](50) NULL,
    [Column29] [int] NULL,
    [Column30] [text] NULL,
 CONSTRAINT [PK] PRIMARY KEY CLUSTERED 
(
    [Column1] ASC,
    [Column2] ASC,
    [Column3] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
ALTER TABLE [dbo].[MyTable] ADD  CONSTRAINT [DF_Column4]  DEFAULT (0) FOR [Column4]
GO
ALTER TABLE [dbo].[MyTable] ADD  CONSTRAINT [DF_Column5]  DEFAULT (0) FOR [Column5]
GO
ALTER TABLE [dbo].[MyTable] ADD  CONSTRAINT [DF_Column15]  DEFAULT (0) FOR [Column15]
GO
ALTER TABLE [dbo].[MyTable] ADD  CONSTRAINT [DF_Column16]  DEFAULT (0) FOR [Column16]
GO


Get this bounty!!!

#StackBounty: #sql-server #sql-server-2014 #query-performance #optimization #full-text-search Can oscillating number of fragment decrea…

Bounty: 100

I’ve got an FTS catalog on a table containing ~10^6 rows. It used to work like a charm, but lately, for an unknown reason, it has started to show very bad performance (queries >30 sec) at random.

Reading Guidelines for full-text index maintenance, I dug around sys.fulltext_index_fragments and noticed the following:

  1. The number of fragments oscillates between 2 and 20 at high frequency
    (multiple times per minute);
  2. The biggest fragment contains ~10^5 rows and is 11 MB;
  3. The others contain from 1 to 40 rows.

Can this oscillating number of fragments mess up SQL Server’s execution plan selection?

What can I do to clean this up?
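For reference, here's how I've been inspecting the fragments, plus the maintenance command that, as I understand it, merges closed fragments into the master fragment:

-- Fragment inventory for the full-text indexes (the view referenced above)
SELECT table_id, fragment_id, status, row_count, data_size
FROM sys.fulltext_index_fragments;

-- Merge closed fragments into the master fragment (catalog name is a placeholder)
ALTER FULLTEXT CATALOG MyCatalog REORGANIZE;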


Get this bounty!!!

#StackBounty: #sql-server #ssms #ssas #data-warehouse #cube SSAS Internal error: An Unexpected error occurred (file 'pfcre.cpp'…

Bounty: 50

Has anyone seen this error before? How did you fix it? I can’t find anything through Google.

Here is what I have done . . .

1) I tried doing a Google search, but practically nothing came up.
2) I checked all of my permissions everywhere, and from what I can tell, that is not the problem.
3) Building and deploying does not generate errors.
4) I only have errors when I try to process my cube.


Get this bounty!!!

#StackBounty: #security #sql-server #t-sql #sql-injection Safe dynamic SQL for generic search

Bounty: 100

Prompted by discussion about SQL injection, I wanted to put a proof of concept forward to get feedback on whether it is in fact safe and protected against SQL injection or other malicious use. For a good reference on constructing a dynamic search with dynamic SQL, I’d probably look there.

This is meant to be a proof of concept, not a complete working solution; it illustrates how we can accept text input from users but handle it as if it were properly parameterized.

The assumptions are as follows:

1) We don’t want to run code client-side. In theory, this could have been done in a middle tier as some kind of API; however, even a middle-tier API endpoint does no good if it does not properly parameterize the query it makes on users’ behalf. Furthermore, having it in SQL means it is now generic to any clients who may need the functionality, but at the expense of poor portability: this will likely work only on Microsoft SQL Server, not on other database vendors, at least not without significant modifications.

NOTE: Though the code below has been tested with SQL Server 2012, in theory it should be compatible with 2008 R2 and higher. We can assume that any answer that works on 2008 R2 is acceptable; an answer that relies on features introduced in a later version will also be considered, especially if it improves the security.

2) Under no circumstances should the users be allowed to write the dynamic SQL, whether directly or indirectly. The only thing that should write the dynamic SQL is our code, without any user input. That means adding layers of indirection to ensure that users’ input cannot become a part of the dynamic SQL being assembled.

3) We assume that the users only need to search a single table, want all columns, but may want to filter on any column. This is only to simplify the proof of concept; there is no technical reason why it can’t do more, provided that the practices outlined here are rigorously followed.

4) Because dynamic SQL is ultimately executed, we have to require that the users have at least SELECT permission on the table they want to search.

Helper Function for data types

We first need a function to help us build a formatted data type, because sys.types doesn’t present the information in the most friendly manner for writing a parameter. While this could be more sophisticated, it suffices for most common cases:

CREATE OR ALTER FUNCTION dbo.ufnGetFormattedDataType (
    @DataTypeName sysname,
    @Precision int,
    @Scale int,
    @MaxLength int
) RETURNS nvarchar(255) 
WITH SCHEMABINDING AS
BEGIN
    DECLARE @Suffix nvarchar(15);

    SET @Suffix = CASE 
        WHEN @DataTypeName IN (N'nvarchar', N'nchar', N'varchar', N'char', N'varbinary', N'binary')
        THEN CONCAT(N'(', IIF(@MaxLength = -1, N'MAX', CAST(@MaxLength AS nvarchar(12))), ')')

        WHEN @DataTypeName IN (N'decimal', N'numeric')
        THEN CONCAT(N'(', @Precision, N', ', @Scale, N')')

        WHEN @DataTypeName IN (N'datetime2', N'datetimeoffset', N'time')
        THEN CONCAT(N'(', @Scale, N')')

        ELSE N''
    END;

    RETURN CONCAT(@DataTypeName, @Suffix);
END;
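A quick sanity check of the helper, with the expected output in the comments:

SELECT dbo.ufnGetFormattedDataType(N'varchar',   0, 0, 100) AS a, -- varchar(100)
       dbo.ufnGetFormattedDataType(N'nvarchar',  0, 0, -1)  AS b, -- nvarchar(MAX)
       dbo.ufnGetFormattedDataType(N'decimal',  18, 2, 9)   AS c, -- decimal(18, 2)
       dbo.ufnGetFormattedDataType(N'datetime2', 0, 7, 8)   AS d; -- datetime2(7)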

Main dynamic search procedure

With the function, we can then build our main procedure for creating the dynamic SQL to support generic search:

CREATE OR ALTER PROCEDURE dbo.uspDynamicSearch (
    @SchemaName sysname,
    @TableName sysname,
    @ParameterXml xml 
) AS
BEGIN
    DECLARE @stableName sysname,
            @stableId int,
            @err nvarchar(4000)
    ;

    SELECT  
        @stableName = o.Name,
        @stableId = o.object_id
    FROM sys.objects AS o
    WHERE o.name = @TableName
      AND OBJECT_SCHEMA_NAME(o.object_id) = @SchemaName;

    IF @stableName IS NULL
    OR @stableId IS NULL
    BEGIN
        SET @err = N'Invalid schema or table name specified.';
        THROW 50000, @err, 1;
        RETURN -1;
    END;

    SELECT
        x.value(N'@Name', N'sysname') AS ParameterName,
        x.value(N'@Value', N'nvarchar(MAX)') AS ParameterValue
    INTO #RawData
    FROM @ParameterXml.nodes(N'/l/p') AS t(x);

    IF EXISTS (
        SELECT NULL
        FROM #RawData AS d
        WHERE NOT EXISTS (
            SELECT NULL
            FROM sys.columns AS c
            WHERE c.object_id = @stableId
              AND c.name = d.ParameterName
        )
    )
    BEGIN
        SET @err = N'Invalid column name(s) specified.';
        THROW 50000, @err, 1;
        RETURN -1;
    END;

    SELECT
        ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS Id,
        c.name AS ColumnName,
        d.ParameterValue AS ParameterValue,
        c.user_type_id AS DataTypeId,
        t.name AS DataTypeName,
        c.max_length AS MaxLength,
        c.precision AS Precision,
        c.scale AS Scale,
        dbo.ufnGetFormattedDataType(t.name, c.precision, c.scale, c.max_length) AS ParameterDataType
    INTO #ParameterData
    FROM #RawData AS d
    INNER JOIN sys.columns AS c
        ON d.ParameterName = c.name
    INNER JOIN sys.types AS t
        ON c.user_type_id = t.user_type_id
    WHERE c.object_id = @stableId;

    DECLARE @Sql nvarchar(MAX) = CONCAT(N'SELECT * FROM ', QUOTENAME(OBJECT_SCHEMA_NAME(@stableId)), N'.', QUOTENAME(@stableName));

    IF EXISTS (
        SELECT NULL
        FROM #ParameterData
    )
    BEGIN
        DECLARE @And nvarchar(5) = N' AND ';

        SET @Sql += CONCAT(N' WHERE ', STUFF((
            SELECT 
                CONCAT(@And, QUOTENAME(d.ColumnName), N' = @P', d.Id)
            FROM #ParameterData AS d
            FOR XML PATH(N'')
        ), 1, LEN(@And), N''));

        DECLARE @Params nvarchar(MAX) = CONCAT(N'DECLARE ', STUFF((
            SELECT
                CONCAT(N', @P', d.Id, N' ', d.ParameterDataType, N' = ( SELECT CAST(d.ParameterValue AS ', d.ParameterDataType, N') FROM #ParameterData AS d WHERE d.Id = ', d.Id, N') ')
            FROM #ParameterData AS d
            FOR XML PATH(N'')
        ), 1, 2, N''), N';');

        SET @Sql = CONCAT(@Params, @Sql);
    END;

    SET @Sql += N';';
    EXEC sys.sp_executesql @Sql;
END;

Analysis

Let’s go over the procedure in parts to explain the reasoning behind the design, starting with the parameters.

@SchemaName sysname,
@TableName sysname,
@ParameterXml xml 

The schema and table names are self-evident, but we require that the users provide their search conditions as an XML document. It doesn’t have to be XML; JSON would work as well (provided that you’re using a recent enough version of SQL Server). The point is that it must be a well-defined format with native support for parsing the contents. A sample document may look something like this:

<l>
  <p Name="First Name" Value="Martin" />
  <p Name="Last Name" Value="O’Donnell" />
</l>

The XML is basically a (l)ist of the (p)arameters in name-value pairs.

We have to validate both parameters. The first is easily done:

SELECT  
    @stableName = o.Name,
    @stableId = o.object_id
FROM sys.objects AS o
WHERE o.name = @TableName
  AND OBJECT_SCHEMA_NAME(o.object_id) = @SchemaName;

Because we do not want users’ input to go directly into the dynamic SQL, we use a separate variable, @stableName, which will hold the same value as @TableName, but only if the user hasn’t tried to sneak in extra characters. Since we filter it through sys.objects, that implicitly enforces SQL Server’s identifier rules and thus validates the input.

For the parameters, we need some more work, so we load them into a temporary table. We can’t trust the user inputs, so we must treat them accordingly.

SELECT
    x.value(N'@Name', N'sysname') AS ParameterName,
    x.value(N'@Value', N'nvarchar(MAX)') AS ParameterValue
INTO #RawData
FROM @ParameterXml.nodes(N'/l/p') AS t(x);

Using a temporary table allows us to materialize the contents of the XML in a relational table, since we refer to it twice later on.

We need to validate all the column names in the same manner we did with the table name. Since it’s a set, we’ll use EXISTS to help us out.

IF EXISTS (
    SELECT NULL
    FROM #RawData AS d
    WHERE NOT EXISTS (
        SELECT NULL
        FROM sys.columns AS c
        WHERE c.object_id = @stableId
          AND c.name = d.ParameterName
    )
)
BEGIN
    SET @err = N'Invalid column name(s) specified.';
    THROW 50000, @err, 1;
    RETURN -1;
END;

In addition to validating the column name, we also verify it’s in the same table we are going to query.

SELECT
    ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS Id,
    c.name AS ColumnName,
    d.ParameterValue AS ParameterValue,
    c.user_type_id AS DataTypeId,
    t.name AS DataTypeName,
    c.max_length AS MaxLength,
    c.precision AS Precision,
    c.scale AS Scale,
    dbo.ufnGetFormattedDataType(t.name, c.precision, c.scale, c.max_length) AS ParameterDataType
INTO #ParameterData
FROM #RawData AS d
INNER JOIN sys.columns AS c
    ON d.ParameterName = c.name
INNER JOIN sys.types AS t
    ON c.user_type_id = t.user_type_id
WHERE c.object_id = @stableId;

In addition to validating the column names we want to use for filters, we collect metadata from sys.columns and sys.types. Note that the XML itself can’t be trusted to tell us what data types to use; that would be an attack vector, so we must rely on the information from the catalog views, accepting only the parameter values themselves from the user-supplied XML.

At this point, we’ve validated and collected all the metadata about the columns but we still can’t trust the contents of the ParameterValue.

Note the ROW_NUMBER() generating the IDs of the parameters. That is important as will be seen later on.

DECLARE @Sql nvarchar(MAX) = CONCAT(N'SELECT * FROM ', QUOTENAME(OBJECT_SCHEMA_NAME(@stableId)), N'.', QUOTENAME(@stableName));

We build the first part of the dynamic SQL. We assume that it’s OK to allow the users to select the entire table, though that might be dickish if there are a lot of records. In a complete solution, it might be more prudent to have a TOP 100 or something like that. The problem is that a TOP N usually doesn’t make sense without an ORDER BY, so the same complete solution should probably let users specify a sort order to ensure consistent results, even if it’s something lame like sorting by the identity column.

Going forward, we’ll assume that we have a set of parameters that we need to filter on.

SET @Sql += CONCAT(N' WHERE ', STUFF((
    SELECT 
        CONCAT(@And, QUOTENAME(d.ColumnName), N' = @P', d.Id)
    FROM #ParameterData AS d
    FOR XML PATH(N'')
), 1, LEN(@And), N''));

Here, we abuse FOR XML PATH to concatenate the filter predicates for the WHERE clause. Using the sample XML above, the output would be something like WHERE [First Name] = @P1 AND [Last Name] = @P2. Note the horrid column names, with spaces in them, chosen to show the value of QUOTENAME: even in a crappy database schema, we avoid errors from iffy identifiers.

Note that we also assume all filters in the XML are AND‘d together. In a more complex implementation, users might want the option to OR, or to mix AND and OR, either of which could be provided via an XML attribute.

DECLARE @Params nvarchar(MAX) = CONCAT(N'DECLARE ', STUFF((
    SELECT
        CONCAT(N', @P', d.Id, N' ', d.ParameterDataType, N' = ( SELECT CAST(d.ParameterValue AS ', d.ParameterDataType, N') FROM #ParameterData AS d WHERE d.Id = ', d.Id, N') ')
    FROM #ParameterData AS d
    FOR XML PATH(N'')
), 1, 2, N''), N';');

This is the closest the user’s input gets to our dynamic SQL. We read from the same temporary table we created and assign to a parameter we create ourselves, with a CAST. Note that we could have used TRY_CAST to avoid a runtime error, but I would argue that an error needs to occur if the users put in bad input. In a complete solution, the procedure could be wrapped in a TRY/CATCH block to sanitize the error message somehow.

Again using the XML example from above, it’d come out something like this (formatted for readability):

DECLARE @P1 varchar(100) = (
  SELECT CAST(d.ParameterValue AS varchar(100)) 
  FROM #ParameterData AS d WHERE d.Id = 1
);

Note that we did not even use the name the users provided to us; we used a numeric ID concatenated by our own code. Furthermore, the code reads from the temporary table and CASTs the value into the parameter we want it to be. That makes it easy to handle different data types for the various parameters the users may send, without ever concatenating the values they provide into our dynamic SQL.

Once we have that, we concatenate the assignments to the @Sql and execute it:

EXEC sys.sp_executesql @Sql;

Note that we didn’t use the @params parameter of sp_executesql: there are no parameters we can really pass in, since they live in a temporary table, which is why we used assignments inside the dynamic SQL to move the user’s input from the XML document into a parameter within the dynamic SQL.
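For contrast, here is a minimal sketch of the conventional sp_executesql pattern when the parameter list is known up front (using the table and column from the samples below); it isn’t usable here precisely because our parameter list is dynamic:

DECLARE @Sql nvarchar(MAX) =
    N'SELECT * FROM [dbo].[Customers] WHERE [First Name] = @FirstName;';

-- @params declares the parameter; the value never touches the SQL string
EXEC sys.sp_executesql
    @Sql,
    N'@FirstName nvarchar(100)',
    @FirstName = N'Martin';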

Sample calling code

--Returns one match
EXEC dbo.uspDynamicSearch 
    N'dbo',
    N'Customers', 
    N'<l><p Name="First Name" Value="Martin" /><p Name ="Last Name" Value="O’Donnell" /></l>';

-- Returns an empty result set
EXEC dbo.uspDynamicSearch 
    N'dbo',
    N'Customers', 
    N'<l><p Name="First Name" Value="Martin''; DROP TABLE Customers; --" /><p Name ="Last Name" Value="O’Donnell" /></l>';

--Returns an error; invalid table name
EXEC dbo.uspDynamicSearch 
    N'dbo',
    N'Customers''; DROP TABLE Customers; --', 
    N'<l><p Name="First Name" Value="Martin" /><p Name ="Last Name" Value="O’Donnell" /></l>';

--Returns an error; invalid column name
EXEC dbo.uspDynamicSearch 
    N'dbo',
    N'Customers', 
    N'<l><p Name="First Name''; DROP TABLE Customers; --" Value="Martin" /><p Name ="Last Name" Value="O’Donnell" /></l>';

Here’s a sample of the complete dynamic SQL assembled by the code, formatted for readability:

DECLARE @P1 nvarchar(100) = ( 
  SELECT CAST(d.ParameterValue AS nvarchar(100)) 
  FROM #ParameterData AS d WHERE d.Id = 1
), @P2 nvarchar(100) = ( 
  SELECT CAST(d.ParameterValue AS nvarchar(100)) 
  FROM #ParameterData AS d WHERE d.Id = 2
);

SELECT * 
FROM [dbo].[Customers]
WHERE [First Name] = @P1 AND [Last Name] = @P2;

Can this be circumvented?

As mentioned, the discussion about SQL injection made me wonder whether I missed something or made a bad assumption somewhere, such that a malicious user could still manage to circumvent the layers of indirection I’ve put in and inject their nasty little SQL.


Get this bounty!!!

#StackBounty: #c# #asp.net #sql-server #sql-server-2012 #sqlcachedependency Simple SqlCacheDependency

Bounty: 50

Almost every tutorial I have read seems to set up SqlCacheDependency incorrectly. I believe they normally mix up the outdated polling method with the query-notification method.

Here are two of many examples:

https://www.codeproject.com/Tips/787483/Web-Caching-with-SqlCacheDependency-Simplified (non-microsoft)

https://docs.microsoft.com/en-us/dotnet/api/system.web.caching.sqlcachedependency?view=netframework-4.8 (Microsoft)


Based on my testing, if you are using the broker (Service Broker, SQL Server 2005+), you don’t need to make any .config changes, nor do you need to make any SqlCacheDependencyAdmin calls (no need to define tables, etc.).

I simply just do this:

SqlDependency.Start(connString);
...
queryString = "SELECT ...";
cacheName = "SqlCache" + queryString.GetHashCode();
...
using (var connection = new SqlConnection(connString))
{
    connection.Open();
    var cmd = new SqlCommand(queryString, connection)
    {
        Notification = null, 
        NotificationAutoEnlist = true
    };

    var dependency = new SqlCacheDependency(cmd);

    SqlDataReader reader = cmd.ExecuteReader();
    try
    {
        while (reader.Read())
        {
            // Set the result you want to cache
            data = ...
        }
    }
    finally
    {
        reader.Close();
    }

    HostingEnvironment.Cache.Insert(cacheName, data, dependency);
}

(The code that checks whether the cache is null is not included, as that’s all just setup. I just want to show the setting of the cache.)

This seems to work without the need to define which tables are involved in the query or to create complicated triggers on each table. It just works.

More surprising to me is that the rules for making a query support notification (https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2008-r2/ms181122(v=sql.105); I can’t find documentation newer than 2008) don’t seem to apply. I purposely use a TOP in my SQL and it still works.
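For reference, my reading of those rules is that an eligible statement needs an explicit column list and two-part object names, and must avoid TOP; a sketch of a query that should qualify (the column names are invented):

-- Column names are hypothetical, for illustration only
SELECT SettingName, SettingValue
FROM dbo.Settings
WHERE SettingName = N'SomeKey';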

For a test, I run a query 1000 times involving a table named “Settings”. Then I update a value in the table and repeat the query.

I watch Profiler for any queries involving the word “Settings” and I see the query executed just once (to set the cache), then the update statement occurs, and then the query is re-executed one more time (the cache was invalidated and the query ran again).

I am worried that in my 2-3 hours of struggling with the proper way to do this I am missing something, and that it really is this simple.

Can I really just put in any query I want and it’ll just work? I am looking for any pointers on where I am doing something dangerous or non-standard, or any small print that I am missing.


Get this bounty!!!

#StackBounty: #sql-server #sql-server-2014 #kerberos #spn The SQL Server Network Interface library could not deregister the Service Pri…

Bounty: 100

I’ve set up a SQL Server service account with permissions to read and write service principal names. When SQL Server starts up I get the expected message in the logs showing that the service account has successfully registered the SPN:

The SQL Server Network Interface library successfully registered the
Service Principal Name (SPN) [MySPN] for the SQL Server service.

Connections to the database server use Kerberos authentication as expected and all seems well.
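For reference, a check along these lines confirms the scheme (it returns KERBEROS rather than NTLM):

SELECT auth_scheme
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;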

However, when I shut down SQL Server a message is entered in the logs showing that the SPN could not be deregistered:

The SQL Server Network Interface library could not deregister the
Service Principal Name (SPN) [MySPN] for the SQL Server service.
Error: 0x6d3, state: 4. Administrator should deregister this SPN
manually to avoid client authentication errors.

I’ve checked that there are no duplicate SPNs and that the SPN is registered to the correct service account, and only to that account. The server has been rebooted several times. Microsoft’s Kerberos Configuration Manager doesn’t offer any insight.

I don’t understand why the service account would be permitted to create the SPN but not permitted to delete it.


Get this bounty!!!

#StackBounty: #javascript #php #jquery #sql-server #ajax Update SQL Query with populated variables from AJAX functions over multiple PH…

Bounty: 50

I'm trying to get help with this question.

In short: it doesn't update my DB entry in the step-by-step order in which I think it should happen.

It's a bit difficult to explain, but I'll try to do so step by step with minimal, readable code. I'm using the original code; it's hard to convert it into reproducible examples.

A.1 Page ma_aktuelle_ReadOut.php
There is a PHP part:

 <?php echo "<a href='ma_Testende.php?TestergebnisID=&TestaufstellungID=". $row['TestaufstellungID']."&TesterID=".$row['TesterID']."' title='Test stoppen' data-toggle='tooltip' class='stoppen'>   <span class='glyphicon glyphicon-stop'></span></a>";
?>

When I click this link, the following JavaScript function is called and asks me “really stop?”:


$(document).ready(function(){
    $("a.stoppen").click(function(e){
        if(!confirm('Wirklich stoppen?')){ // "Really stop?"
            e.preventDefault();
            $('.alert').show();
            return false;
        }
        return true;
    });
});

<style>
 .alert {
  display: none;
    }
</style>

When I click “yes”, it opens the second page.

A 2 Page ma_Testende.php
On this page there are two AJAX functions.
The first one asks for “Datum” via type: 'get' from the next page and waits for success (see Page B 3):

https://ajax.googleapis.com/ajax/libs/jquery/1.12.4/jquery.min.js

B 3 Page ma_get-TesterID_Testende.php

<?php
$cinfo = array(
    "Database" => $database,
    "UID" => $username,
    "PWD" => $password
);
$conn = sqlsrv_connect($server, $cinfo);

$sqlreadZeit = "SELECT TOP 1 CID, Datum FROM DB.dbo.TesterCycleCount WHERE TesterID = '".$_GET['TesterID']."' ORDER BY Datum DESC";
$result1 = sqlsrv_query($conn, $sqlreadZeit);
$zeiten_arr = array();
while ($row = sqlsrv_fetch_array($result1, SQLSRV_FETCH_ASSOC)) {
    $CID = $row['CID'];
    $Datum = $row['Datum']->format('d.m.Y H:i:s'); // was 'h:m:s': in PHP date formats 'm' is the month, 'i' is minutes
    $zeiten_arr[] = array("CID" => $CID, "Datum" => $Datum);
}
header('Content-type: application/json');
echo json_encode($zeiten_arr);
?>

Back with the “Datum”, the second AJAX call is made (see Page A 2).
With “Datum” and “TestaufstellungID” as variables, it should call the next page and update the DB entry with the populated variables.

B. 4 Page ma_TestendeSQL.php

<?php
$cinfo = array(
    "Database" => $database,
    "UID" => $username,
    "PWD" => $password
);
$conn = sqlsrv_connect($server, $cinfo);

$TestaufstellungID = $_GET['TestaufstellungID'];
$Testende= $_GET['Datum'];
$Testdatum = date('Y-d-m');

$stop = $connection->prepare("WITH UpdateTestende AS (
  SELECT TOP 1  * from DB.dbo.Testergebnisse 
  WHERE TestaufstellungID = :TestaufstellungID
  ORDER BY TestergebnisID DESC 
)
update UpdateTestende 
set Testende = :Testende,
Datum = :Testdatum");
$stop->execute(array(':TestaufstellungID' => $TestaufstellungID, ':Testdatum' => $Testdatum, ':Testende' => $Testende));

    header('Content-type: application/json');
?>

The PHP variable $Testende gets the populated “Datum” from the AJAX functions. All in all, when I click the link on Page A 1, it should update my DB entry with the populated “Datum”: the value comes from the first AJAX call (Page A 2) via the SQL query (Page B 3), goes back to the second AJAX call (Page A 2), and is then sent with data: {TestaufstellungID: TestaufstellungID, Datum: Datum} to the last page (Page B 4).

But it doesn't update my DB entry in this step-by-step order the way I think it should.

Run on its own, the SQL code works fine. With the code header('Content-type: application/json'); in place, the browser tells me the following when I click the link from Page A 1:

SyntaxError: JSON.parse: unexpected character at line 1 column 1 of the JSON data

That's why I posted all the steps: I think at some point the variables are not passed correctly to the next page, or they are empty because the code is not executed in the right order (server/client, PHP/JS, or an asynchronicity problem)…
console.log tells me nothing. At the moment I have no idea where to start with the debugging.

I hope someone can help me. Thanks!

Edit: I'm pretty sure the AJAX call is empty, but I don't see at which step the values become empty.

Edit 2:
The AJAX call is empty or is not starting.
Further investigation: the AJAX call alerts the error handler with an empty exception and never alerts the success handler. So either it doesn't reach the page ma_get-TesterID_Testende.php or it doesn't return the Datum.
Could the problem be that cross-origin requests (cross-site scripting restrictions) are not enabled?

But on another page a similar AJAX call works fine.

$(document).ready(function(){

    var TesterID = "<?php echo $_GET['TesterID']; ?>"; /* get the tester's value */

    $.ajax({ /* call AJAX */
        url: 'ma_get-TesterID.php',
        type: 'get', /* method used to transfer the data */
        data: {TesterID: TesterID}, /* data to send */
        dataType: 'json',
        success: function(response){ /* receive the returned data */

            var len = response.length;

            $("#Teststart").empty(); /* the received data is shown in this element */
            for (var i = 0; i < len; i++){
                var CID = response[i]['CID'];
                var Datum = response[i]['Datum'];

                $("#Teststart").append("<option value='"+Datum+"'>"+Datum+"</option>");
            }
        }
    });

    $("#TesterID").change(function(){ /* when a value is picked in the select field */
        var TesterID = $(this).val(); /* get the selected tester's value */

        $.ajax({ /* call AJAX */
            url: 'ma_get-TesterID.php',
            type: 'get', /* method used to transfer the data */
            data: {TesterID: TesterID}, /* data to send */
            dataType: 'json',
            success: function(response){ /* receive the returned data */

                var len = response.length;

                $("#Teststart").empty(); /* the received data is shown in this element */
                for (var i = 0; i < len; i++){
                    var CID = response[i]['CID'];
                    var Datum = response[i]['Datum'];

                    $("#Teststart").append("<option value='"+Datum+"'>"+Datum+"</option>");
                }
            }
        });
    });

});

In this example the AJAX call starts when I change the value of a dropdown selection form. Is there a difference?

I try to explain step by step how this AJAX should work, and how my application should execute, in my other question:

Update SQL Query with populated variables from AJAX functions over multiple PHP Pages

Edit 3:
jQuery version:
https://code.jquery.com/jquery-3.4.1.js


Get this bounty!!!

#StackBounty: #sql-server #transaction-log #log-shipping Differences Between Setting Up Transaction Log Shipping through Wizard vs Scri…

Bounty: 100

There appears to be a fairly significant difference between setting up transaction log shipping (in Standby/Read-Only mode) through the SSMS wizard, which generates jobs that call an external .exe to handle the backup and restore, and doing the log shipping through a hand-written script:
Backup:

BACKUP LOG [DBName]
TO DISK = 'Z:\DBName_TimeStamp.trn'
WITH NAME = N'DBName_Log', INIT, FORMAT;

Restore:

EXEC [LSServer].[master].sys.sp_executesql N'RESTORE LOG [DBName] FROM DISK = N''B:\DBName_TimeStamp.trn'' 
WITH STANDBY = N''B:\DBName.standby'';';

When using the log shipping set up through the wizard, the database seems to be ready immediately and does not experience any lag the first time a stored procedure is executed or a query is run. However, when I set up the backup and restore through my script above, the first time a stored procedure is executed against the recently restored database, it takes 10x or more time to complete (I originally asked about this issue here: After Restoring Log Shipping to Secondary Server, First Stored Procedure Execution is Slow).

I’ve eliminated different disks/hardware etc. as possible causes, since the issue now affects the same instance/database that was originally using the log shipping set up by the wizard, which has begun to experience the same slow initial execution/query after a restore.
The .trn file sizes look the same whichever method I use.

What can be done to identify/ eliminate this difference without going through the SSMS wizard?

Some additional information included in the linked question:

The only difference is that the primary server executes with 1.4 seconds of “Wait time on server replies” and the secondary takes 81.3 seconds.

I do see a large number of PAGEIOLATCH_SH waits from the first execution, as you predicted:

                      diff after first exec   diff after second exec
waiting_tasks_count                   10903                      918
wait_time_ms                         411129                    12768
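For reference, numbers like these can be captured by snapshotting the wait-stats DMV before and after each execution; a sketch of the per-snapshot query:

SELECT wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type = N'PAGEIOLATCH_SH';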


Get this bounty!!!