Why is some sql query much slower when used with SqlCommand?

I have a stored procedure that executes much faster from SQL Server Management Studio (2 seconds) than through System.Data.SqlClient.SqlCommand (it times out after 2 minutes).

What could be the reason for this?


Details: In SQL Server Management Studio this runs in 2 seconds (against the production database):

EXEC sp_Stat
    @DepartmentID = NULL

In .NET/C#, the following times out after 2 minutes (against the same production database):

string selectCommand = @"
EXEC sp_Stat
    @DepartmentID = NULL";
string connectionString = "server=***;database=***;user id=***;pwd=***";
using (SqlConnection connection = new SqlConnection(connectionString))
{
    using (SqlCommand command = new SqlCommand(selectCommand, connection))
    {
        connection.Open();
        using (SqlDataReader reader = command.ExecuteReader())
        {
            while (reader.Read())
            {
            }
        }
    }
}

I also tried selectCommand = "sp_Stat" with CommandType = CommandType.StoredProcedure and an SqlParameter (sketched below), but the result is the same.

And without EXEC it's the same result as well.
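
A minimal sketch of that stored-procedure variant, assuming @DepartmentID is an int (its actual type isn't shown here):

// Sketch of the CommandType.StoredProcedure variant; assumes @DepartmentID is an int
// and that using System.Data; is in scope for CommandType and SqlDbType.
using (SqlConnection connection = new SqlConnection(connectionString))
using (SqlCommand command = new SqlCommand("sp_Stat", connection))
{
    command.CommandType = CommandType.StoredProcedure;
    command.Parameters.Add("@DepartmentID", SqlDbType.Int).Value = DBNull.Value; // send NULL
    connection.Open();
    using (SqlDataReader reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
        }
    }
}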

On an almost-empty development database both cases finish in less than 1 second. So it's related to the amount of data in the database, but it only seems to happen from .NET...


What Marc Gravell wrote below about different SET values turned out to make the difference in this case.

SQL Server Profiler showed that SQL Server Management Studio issues the following SET statements that the .NET SqlClient Data Provider does not:


SET ROWCOUNT 0 
SET TEXTSIZE 2147483647 
SET NOCOUNT OFF 
SET CONCAT_NULL_YIELDS_NULL ON 
SET ARITHABORT ON 
SET LOCK_TIMEOUT -1 
SET QUERY_GOVERNOR_COST_LIMIT 0 
SET DEADLOCK_PRIORITY NORMAL 
SET TRANSACTION ISOLATION LEVEL READ COMMITTED 
SET ANSI_NULLS ON 
SET ANSI_NULL_DFLT_ON ON 
SET ANSI_PADDING ON 
SET ANSI_WARNINGS ON 
SET CURSOR_CLOSE_ON_COMMIT OFF 
SET IMPLICIT_TRANSACTIONS OFF 
SET QUOTED_IDENTIFIER ON
SET NOEXEC, PARSEONLY, FMTONLY OFF

When I included these, the query took the same amount of time from SSMS and from .NET. And the SET statement responsible is ...

SET ARITHABORT ON
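
For completeness, a sketch of issuing that SET from .NET before the procedure call (per the warnings in the comments below, this mostly hides a plan/parameter-sniffing problem rather than fixing it):

using (SqlConnection connection = new SqlConnection(connectionString))
{
    connection.Open();

    // Make this session match SSMS's ARITHABORT setting so the same kind of
    // cached plan is used. See the comments below: this can mask a
    // parameter-sniffing problem rather than fix it.
    using (SqlCommand set = new SqlCommand("SET ARITHABORT ON;", connection))
    {
        set.ExecuteNonQuery();
    }

    using (SqlCommand command = new SqlCommand("EXEC sp_Stat @DepartmentID = NULL", connection))
    using (SqlDataReader reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
        }
    }
}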

What have I learnt? Maybe to use a profiler instead of guessing...

(The solution at first seemed to be related to parameter sniffing. But I had mixed some things up...)

Fernferna answered 29/4, 2009 at 11:26 Comment(3)
One more thing to consider: if you have a performance penalty with ARITHABORT OFF you may be JOINing two tables by a string and a numeric field. In my case, adding a simple ISNUMERIC(string_field)=1 check before the join solved the problem :)Promethean
I would never have considered this - very very useful question/answer! :) We just have a raw SQL query being executed through Dapper and the same SET was the culprit.Wailoo
Be aware of setting ARITHABORT ON in application layer.. "adding SET ARITHABORT ON to your procedure is not a solution. It will seem to work when you try it. But that is only because you recreated the procedure which forced a new compilation" Full article here: sommarskog.se/query-plan-mysteries.htmlInnards

Another thing that can be important is the SET options that are enabled. Some of these options change the query plan enough to change the performance profile. Some can have a huge impact if you are looking at (for example) a computed + persisted (and possibly indexed) column: if the SET options aren't compatible, the server can be forced to re-calculate the values rather than use the indexed value - which can turn an index seek into a table scan plus a calculation.

Try using the profiler to see what SET options are "in play", and see if using those options changes things.
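
As an illustration (not from the answer itself), one quick way to see which SET options a session has in effect is DBCC USEROPTIONS; running it once from SSMS and once from the application makes the differences easy to compare:

// Dump this session's effective SET options so they can be compared with what
// SSMS reports for its own session (connection string is a placeholder).
using (SqlConnection connection = new SqlConnection(connectionString))
using (SqlCommand command = new SqlCommand("DBCC USEROPTIONS;", connection))
{
    connection.Open();
    using (SqlDataReader reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            // Two columns: option name and its current value.
            Console.WriteLine("{0} = {1}", reader.GetValue(0), reader.GetValue(1));
        }
    }
}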

Another impact is the connection string; for example, if you enable MARS that can change the behaviour in subtle ways.

Finally, transactions (implicit (TransactionScope) or explicit) can have a huge impact, depending on the isolation level.
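
To illustrate that last point (a sketch, not part of the original answer): an ambient TransactionScope defaults to the Serializable isolation level, which is far stricter than SQL Server's default Read Committed, so it is worth setting the isolation level explicitly:

// Requires using System.Transactions; the default isolation level of an
// ambient TransactionScope is Serializable, not Read Committed.
var options = new TransactionOptions { IsolationLevel = IsolationLevel.ReadCommitted };
using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
using (var connection = new SqlConnection(connectionString))
{
    connection.Open(); // enlists in the ambient transaction
    // ... execute commands here ...
    scope.Complete();
}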

Politician answered 29/4, 2009 at 11:56 Comment(7)
Thanks. I will look into how to examine the SET options. I guess it might be related to the difference, because otherwise the queries are identical...Fernferna
Hi Marc, I have been having this issue for years on a DB that I'm running, and the only solution has been to SET ARITHABORT ON/OFF in the stored proc. Unfortunately it keeps happening and I have to flip the ARITHABORT option to the opposite of what it was. Do you know of any way to tell my .NET app to use the same SET options that SSMS uses?Galingale
Is it possible to configure ARITHABORT with Dapper, Marc? Is this something that is set at the connection level, rather than the command level? Is this a sql command I have to issue every time I establish a connection?Spirograph
@Spirograph that isn't a dapper concern - dapper merely wraps ADO.NET in a friendly face. If you need to do something extra on a connection, you'll still need to do that - although you could at least now use conn.Execute(...). I feel your pain: we have similar requirements but with isolation level; it would be nice if we could set a connection-string default that is automatically restored whenever a connection is reset.Politician
Yeah, definitely not a Dapper concern, I get that. Looking for some type of global solution. We set it in the stored proc for now, which I don't consider a good solution. I read that it can be applied at the user level. If I set the option on the connection with Execute, do you know if those options will persist with .NET's connection pooling? Thanks for the feedback! Love Dapper.Spirograph
Be aware of setting ARITHABORT ON in application layer.. "adding SET ARITHABORT ON to your procedure is not a solution. It will seem to work when you try it. But that is only because you recreated the procedure which forced a new compilation" Full article here: sommarskog.se/query-plan-mysteries.htmlInnards
For me it was ARITHABORT OFF learn.microsoft.com/en-us/sql/t-sql/statements/…Breather

This is almost certainly due to an 'incorrect' cached query plan. This has come up on SO quite a few times.

Do you have up-to-date statistics? A regular scheduled index maintenance plan?

You can test if it is definitely due to the cached query plan by adding this to your stored procedure definition:

CREATE PROCEDURE usp_MyProcedure WITH RECOMPILE...

This will re-index the entire database (use caution if the database is very large!):

exec sp_msforeachtable "dbcc dbreindex('?')"

SO posts:

Big difference in execution time of stored proc between Managment Studio and TableAdapter.

Parameter Sniffing (or Spoofing) in SQL Server

optimize for unknown for SQL Server 2005?

Different Execution Plan for the same Stored Procedure

Heptagonal answered 29/4, 2009 at 11:35 Comment(5)
I ran the exact same queries (EXEC sp_Stat @DepartmentID = NULL) right after each other. First the one that took 2 seconds with SSMS. Then (a few seconds later) the one that timed out from .NET. Do you think it might still be related to an incorrectly cached query plan?Fernferna
either that or statistics are out of date, or indexes need to be rebuilt (which updates stats on the respective columns)Heptagonal
I use exact same parameters. And the WITH RECOMPILE option does not change the outcome...Fernferna
The articles you reference seem to talk about parameterized queries with an execution time that depends on the value of the parameters. I have the exact same values in these tests...Fernferna
Thanks. The solution was related to parameter sniffing. It helped to store the values of the sproc's input parameters in local variables...Fernferna

Had a similar issue, and it turns out having MultipleActiveResultSets=true in the connection string (which is supposed to have minimal impact) was making pulling 1.5 million records over a remote connection take 25 minutes instead of around 2 minutes.
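
For reference, a sketch of where the flag lives (server and database names below are placeholders); MARS is off by default, so it can simply be omitted unless it's actually needed:

// The MARS flag is part of the connection string and is off by default.
// Server and database values below are placeholders.
var builder = new SqlConnectionStringBuilder
{
    DataSource = "myServer",
    InitialCatalog = "myDb",
    IntegratedSecurity = true,
    MultipleActiveResultSets = false // the default; enable only when genuinely needed
};
using (var connection = new SqlConnection(builder.ConnectionString))
{
    connection.Open();
    // ... read the large result set ...
}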

Bantu answered 28/7, 2016 at 3:43 Comment(3)
Amen, I just spent more than a day grinding on this. Query was fine and very simple, I looked into the SET options, etc. Finally played around with the connection string and found this little property was causing extreme performance degradation while iterating the result set (order of magnitude slower).Amice
For me this manifested itself mostly during a SQL bulk copy. I'd be curious if this was the same situation for others.Bantu
In my case the problem was noticed when iterating relatively large result sets (2k+ records). I would assume it affects any size result set, but the performance impact would not be noticed at all if you were only pulling a few records (e.g. 10ms turns to 100ms, hard to notice).Amice

We had a similar issue, where a query would complete in 2 seconds in SSMS and take more than 90 seconds when called from a .NET client (we wrote several VB/C# apps/sites to test it).

We suspected that the query plan would be different, and rewrote the query with explicit looping ("inner loop join" and "with index") hints. This solved the problem.
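
Purely as an illustration of that kind of rewrite (the table, column, and index names are invented, not the original query), such hints embedded in a command string might look like:

// Hypothetical example of explicit loop-join and index hints; all names are invented.
string hintedQuery = @"
SELECT o.OrderID, c.Name
FROM dbo.Orders AS o
INNER LOOP JOIN dbo.Customers AS c WITH (INDEX(IX_Customers_CustomerID))
    ON c.CustomerID = o.CustomerID;";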

Calculation answered 29/4, 2009 at 11:30 Comment(2)
hope you examined your statistics and indexes first! ;)Heptagonal
Yeah, we have a weekly maintenance plan for statistics and indexes (it runs every Sunday). We recreated the stored procedure to remove any cached query plans.Calculation
