Dapper fails calling a Stored Procedure that builds a Dynamic SQL statement

Summary

A .NET REST API using Dapper fails to execute a specific Azure SQL Stored Procedure that constructs a dynamic SQL string in a single statement. The Stored Procedure works correctly when called via SSMS, OLEDB, and an older Entity Framework-based API. Splitting the string concatenation for the dynamic SQL into multiple statements resolves the failure, despite the final SQL string being identical. The root cause is an interaction between the .NextResult() call, a hidden SET NOCOUNT ON/OFF toggle, and an intermediate “rows affected” message generated by the INSERT or UPDATE statements that build the @Result table variable prior to the dynamic SELECT.

Root Cause

The failure is caused by the unsplit SQL (Version 1) leaving an intermediate “rows affected” message in the exact position where the Dapper Query<T> method expects the first result set.

  • The Trigger: The INSERT INTO @Result ... statements that populate the temporary table in the Stored Procedure generate a message: ({rowcount} rows affected).

  • The Protocol: SQL Server reports this count over TDS as a “Done” token (a completion packet carrying a row count), not as a row stream.

  • The Dapper Behavior: Query<T> is designed to stream rows. It calls reader.NextResult() immediately after the first result set.

  • The Sequence of Events:

    Split SP (Version 2, works):

    1. SET @NCOMMAND = ... (1st statement). Implicit SET NOCOUNT OFF (or retained state); a “rows affected” message is sent for the preceding DML.

    2. SET @NCOMMAND = @NCOMMAND + ... (2nd statement). Another “rows affected” message may be sent.

    3. EXEC sp_executesql @NCOMMAND. Dapper sees the result set of the dynamic SELECT.

    4. Dapper calls Read() and gets rows, then calls NextResult(), which returns false (end of stream). Success.

    Unsplit SP (Version 1, fails):

    1. SET @NCOMMAND = @NCOMMAND + ... (all in one statement), followed immediately by EXEC sp_executesql @NCOMMAND.

    2. Dapper calls Read() and encounters the “rows affected” token left over from the table-building INSERTs, not the result set of the dynamic query.

    3. Dapper calls NextResult() and sees another “Done” token (or simply no data).

    4. Dapper tries to map the “metadata” or “empty” result to the generic type T.

    5. Crash: whether T is a scalar (int, string), a primitive type (DateTime), or a mapped class expecting a structured result, hitting a SqlDataReader state that reports HasRows = false, or that lands on a Done packet immediately, triggers an InvalidOperationException (e.g., “The reader is closed”) or a SqlTypeException (e.g., “Unexpected column type”).
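The stream ordering described above can be observed with raw SqlClient instead of Dapper. A minimal diagnostic sketch, assuming Microsoft.Data.SqlClient, the conceptual dbo.GetDynamicData procedure from the example below, and a placeholder connection string connStr:

```csharp
using System;
using System.Data;
using Microsoft.Data.SqlClient;

var connStr = "...";  // placeholder; point at the database hosting the SP
using var conn = new SqlConnection(connStr);
conn.Open();

using var cmd = new SqlCommand("dbo.GetDynamicData", conn)
{
    CommandType = CommandType.StoredProcedure
};
cmd.Parameters.AddWithValue("@FilterValue", 1);

using var reader = cmd.ExecuteReader();
do
{
    // FieldCount == 0 means the reader is positioned on something that
    // is not a real tabular result (only completion tokens so far).
    Console.WriteLine($"FieldCount={reader.FieldCount}");
    while (reader.Read())
    {
        Console.WriteLine($"  row: id={reader.GetInt32(0)}");
    }
} while (reader.NextResult());
```

Running this against both versions of the procedure shows where the empty position appears relative to the dynamic SELECT's rows.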

The specific nuance is that Dapper’s Query<T> handles only the first result set it encounters. If the dynamic execution lands as a “subsequent” result (which happens in parameterized execution paths), the alignment shifts between the result-set index Dapper reads and the result set that actually contains the rows.

Why This Happens in Real Systems

This is a classic “Protocol Mismatch” driven by Implicit Behavior.

  1. Implicit SET NOCOUNT ON/OFF: Developers often forget that SET NOCOUNT ON suppresses the (x rows affected) message. Without it, the stream carries a count for every DML statement in the procedure.
  2. T-SQL Batching: EXEC sp_executesql runs its string as its own batch. If the code before it has generated “Done” tokens, the client cursor (Dapper) must advance past them to reach the sp_executesql result; if a “Done” token is instead treated as a result, Dapper reads an “empty” result set.
  3. Dapper’s Strict Expectations: Query<T> expects a tabular result immediately. It does not aggressively consume “rows affected” messages between batches unless the call is specifically wrapped in a reader loop.
  4. The “Kicker” Explanation: Splitting the @NCOMMAND construction into separate statements creates a literal batch break. The intermediate “rows affected” messages are consumed before the final EXECUTE, so Dapper initializes, sees the noise, moves past it (leaving the reader state clean), and lands on the actual SELECT inside sp_executesql. The unsplit version places the “rows affected” token and the sp_executesql result in immediate succession, confusing the reader state machine.

Real-World Impact

  • Brittle Data Access: The application relies on undocumented string manipulation behavior in T-SQL to satisfy the specific quirks of the .NET Data Reader.
  • Maintenance Nightmare: Developers must maintain two versions of the same logic (split vs. unsplit) based on the specific Dapper version or SQL complexity.
  • Obscure Errors: The error “Invalid attempt to read when no rows are available” or “SqlDataReader is closed” provides zero context about the hidden “rows affected” messages.

Example or Code

Stored Procedure (Conceptual):

CREATE PROCEDURE dbo.GetDynamicData
    @FilterValue INT
AS
BEGIN
    -- NO SET NOCOUNT ON; -- its absence is what trips up Dapper
    DECLARE @SQL NVARCHAR(MAX);
    -- A temp table is used rather than a table variable: a table
    -- variable is not visible inside the batch run by sp_executesql.
    CREATE TABLE #Result (id INT, field1 NVARCHAR(50));

    -- Insert logic that populates #Result; with NOCOUNT off this
    -- emits a "(1 rows affected)" message into the stream
    INSERT INTO #Result (id, field1)
    SELECT 1, 'Test';

    -- PROBLEMATIC (Unsplit, Version 1):
    SET @SQL = N'SELECT * FROM #Result ORDER BY id';
    EXEC sp_executesql @SQL;

    -- WORKAROUND (Split, Version 2):
    -- SET @SQL = N'SELECT * FROM #Result ';
    -- SET @SQL = @SQL + N'ORDER BY id';
    -- EXEC sp_executesql @SQL;
END

C# Dapper Call (Conceptual):

var parameters = new { FilterValue = 1 };
// This line fails on Version 1, works on Version 2
var results = connection.Query("dbo.GetDynamicData", parameters, commandType: CommandType.StoredProcedure);
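The crash modes above depend on the generic type T, so the typed form of the same call is worth showing. A sketch, where MyRow is a hypothetical DTO matching the columns the conceptual SP returns:

```csharp
using System.Data;
using Dapper;

public sealed class MyRow
{
    public int Id { get; set; }
    public string Field1 { get; set; }
}

// Same failure profile as the untyped call: works against the split
// SP (Version 2), fails against the unsplit SP (Version 1).
var typed = connection.Query<MyRow>(
    "dbo.GetDynamicData",
    new { FilterValue = 1 },
    commandType: CommandType.StoredProcedure);
```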

How Senior Engineers Fix It

The senior engineer avoids modifying the SP (per the requirement) but directs the developer to fix the Client-Side Consumption or the SQL Protocol compliance.

  1. Explicitly Handle SET NOCOUNT:
    Tell the developer to ensure the Stored Procedure explicitly starts with SET NOCOUNT ON;. This removes the intermediate “rows affected” messages that confuse the Dapper reader.
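When the procedure truly cannot be edited, the same effect can be had from the client side: because NOCOUNT is a session-scoped setting, executing the procedure from a text batch that sets it first suppresses the counts emitted inside the procedure. A sketch using the parameter name from the conceptual SP above:

```csharp
using System.Data;
using Dapper;

// NOCOUNT set in the outer batch stays in effect while the SP runs
// (unless the SP itself sets it OFF), so no "rows affected" tokens
// land in front of the dynamic SELECT's result set.
var results = connection.Query(
    "SET NOCOUNT ON; EXEC dbo.GetDynamicData @FilterValue = @FilterValue;",
    new { FilterValue = 1 },
    commandType: CommandType.Text);
```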

  2. Use Execute or QueryMultiple:
    If the SP returns multiple result sets or “rows affected” messages are unavoidable, the developer should not use Query<T> (which assumes one clean result). They should use connection.QueryMultiple or wrap the call in a way that consumes the stream correctly.
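One way to "consume the stream correctly" with Dapper is a GridReader loop that reads grids until it finds the one containing rows. A sketch under the same assumptions as the examples above:

```csharp
using System.Collections.Generic;
using System.Data;
using System.Linq;
using Dapper;

using var grid = connection.QueryMultiple(
    "dbo.GetDynamicData",
    new { FilterValue = 1 },
    commandType: CommandType.StoredProcedure);

// Advance past any empty grids until we hit real data (or run out).
IEnumerable<dynamic> rows = Enumerable.Empty<dynamic>();
while (!grid.IsConsumed)
{
    rows = grid.Read();
    if (rows.Any()) break;
}
```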

  3. The “Raw Reader” Fix:
    If the developer cannot modify the SP, tell them to implement a raw ExecuteReader wrapper using SqlCommand. They must iterate through the reader:

    using (var reader = cmd.ExecuteReader())
    {
        // Skip result sets that expose no columns ("Rows Affected"
        // messages often manifest as zero-column results in some
        // contexts). Advance with NextResult(), not Read(): Read()
        // on a column-less position cannot move to the next set.
        while (reader.FieldCount == 0 && reader.NextResult())
        {
            // consume/no-op
        }
        // Now read the actual data
        if (reader.HasRows)
        {
            while (reader.Read())
            {
                // Map manually
            }
        }
    }

Why Juniors Miss It

  1. “It works in SSMS”: Juniors test strictly in SSMS where the “Rows Affected” is just a message in the messages pane, not part of the returned stream.
  2. “It’s valid SQL”: The T-SQL is syntactically correct. The junior assumes the database driver handles everything.
  3. Blind Trust in ORM: They treat Dapper/EF as a “black box” and assume it handles all SQL variations.
  4. Lack of Protocol Knowledge: They don’t understand that SQL Server sends “Done” tokens and “Column Metadata” tokens separately. They think ExecuteReader just sends a blob of SQL and gets a blob of rows back. They don’t realize that how the SQL is batched inside the SP affects the tokens sent over the wire.