Poor performance with SqlParameter
I have a web service, so its handler is called concurrently all the time.

Inside the handler I create a SqlConnection and a SqlCommand. I have to execute about 7 different commands, and different commands require various parameters, so I just add them all once:

command.Parameters.Add(new SqlParameter("@UserID", userID));
command.Parameters.Add(new SqlParameter("@AppID", appID));
command.Parameters.Add(new SqlParameter("@SID", SIDInt));
command.Parameters.Add(new SqlParameter("@Day", timestamp.Date));
command.Parameters.Add(new SqlParameter("@TS", timestamp));

Then during execution I just change the CommandText property and call ExecuteNonQuery() or ExecuteScalar().

And I face a performance issue. For example, a little debugging and profiling shows that the command

command.CommandText = "SELECT LastShowTS FROM LogForAllTime WHERE UserID = @UserID";

takes about 50 ms on average. If I change it to:

command.CommandText = "SELECT LastShowTS FROM LogForAllTime WHERE UserID = '" + userID.Replace("\'", "") + "'";

then it takes only 1 ms on average!

I just can't figure out where to start investigating the problem.

Peyote answered 21/12, 2011 at 12:52 Comment(6)
Can you state: what is the defined type of userID in the C#, and what is the defined type of the UserID column in the database?Lilalilac
Are you sure this is your performance bottleneck? Have you profiled everything?Fane
userID is a string, in DB it is varchar(20) and is a PKMyosin
Well, if I run the hardcoded statement from SSMS, the profiler says it is only 1 ms. Then I record timestamps from code right before ExecuteNonQuery and right after. If I use a direct string, it is 1 ms; if I use a param, it is 50 ms. I've tried creating a separate command object with a separate param collection and the result is just the same.Myosin
@Алексей can you try explicitly configuring the parameter as varchar? A C# string is unicode, so (by default) maps to nvarchar, not varchar.Lilalilac
Yes, it works! Thank you so much. Can you write a separate answer, so I can mark it as resolved?Myosin
That sounds like it has cached a query-plan for an atypical @UserID value (one of the early ones), and is reusing a poor plan for later queries. This isn't an issue in the second case since each has a separate plan. I suspect you just need to add:

OPTION (OPTIMIZE FOR UNKNOWN)

to the query, which will make it less keen to re-use plans blindly.
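Applied to the query from the question, the hint would look like this (a sketch; the table and column names are taken from the question):

```sql
SELECT LastShowTS
FROM LogForAllTime
WHERE UserID = @UserID
OPTION (OPTIMIZE FOR UNKNOWN);  -- compile the plan for average statistics, not for the first sniffed @UserID
```

SQL Server also supports a per-parameter form, OPTION (OPTIMIZE FOR (@UserID UNKNOWN)), which the asker tries in the comments below.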


Alternative theory:

You might have a mismatch between the type of userID (in the C#) and the type of UserID (in the database). This could be as simple as unicode vs ANSI, or could be int vs varchar(n), etc. If in doubt, be very specific when configuring the parameter: add it with the correct sub-type and size.

Clarification

Indeed, it looks like the problem here is the difference between a C# string (unicode) and the database column, which is varchar(n) (ANSI). The SqlParameter should therefore be explicitly added as such (DbType.AnsiString).
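For example (a sketch reusing the names from the question; varchar(20) matches the column type mentioned in the comments, and a System.Data.SqlClient context is assumed):

```csharp
// Declare the parameter as ANSI varchar(20), matching the column exactly, so
// SQL Server can seek the index instead of converting the whole column to nvarchar.
command.Parameters.Add("@UserID", SqlDbType.VarChar, 20).Value = userID;

// Equivalent, via the provider-agnostic DbType:
// var p = new SqlParameter("@UserID", userID) { DbType = DbType.AnsiString, Size = 20 };
// command.Parameters.Add(p);
```

With the default SqlParameter("@UserID", userID) constructor, the string value is sent as nvarchar, which has higher type precedence than varchar, forcing the conversion onto the column side.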

Lilalilac answered 21/12, 2011 at 12:56 Comment(8)
Would that really make a difference for such a simple WHERE clause?Britain
@Britain if the data is uneven, yes; imagine: the first query runs for a sample user with no (or very little) data - the query optimizer uses the statistics and decides on a query plan optimised for small numbers of rows. This then explodes hugely when faced with 200k rows. I've seen similar cases, absolutely. Only way to find out is to try it with and without the hint, though ;pLilalilac
Well, this is an interesting feature I will surely investigate in more detail. A nice link is here: blogs.msdn.com/b/sqlprogrammability/archive/2008/11/26/… So I've tried: SELECT LastShowTS FROM LogForAllTime WHERE UserID = @UserID OPTION (OPTIMIZE FOR (@UserID UNKNOWN)) No effect...Myosin
@Алексей k; what about the second part ("Alternative theory"), and the comment I added to check some details?Lilalilac
@MarcGravell: would the performance be better if stored procedure is used instead? ie., without using OPTION (OPTIMIZE FOR UNKNOWN)Golliner
Maybe moving to stored procedures would resolve the issue, but for now I want to resolve it within this approach. The point is that in theory and in practice this scenario should work quite well.Myosin
@dotNETbeginner no, a sproc will behave identically in this scenarioLilalilac
An excellent explanation of the string versus varchar performance issue can be found here: codeproject.com/Articles/1039284/… : "So, rather than converting our single NVarChar parameter to VarChar, and comparing it against the indexed [...] column, SQL Server is forced to convert the entire VarChar [...] column to NVarChar in order to do the comparison. This results in the expensive Table Scan operation seen in the execution plan, which accounted for 98.59% of the query’s cost."Falito
You're sending seven times more data to the server, so it will be slower.

Also, if your userID strings have different lengths, setting an explicit length on the SQL parameter will allow it to reuse the query plan better.
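To illustrate: when the size is not set explicitly, the provider infers it from each value, so different-length values produce different parameter declarations and potentially separate entries in the plan cache. A fixed size avoids that (a sketch, reusing names from the question):

```csharp
// Size inferred per value: a 3-character userID is sent as nvarchar(3), a
// 6-character one as nvarchar(6) -> distinct declarations in the plan cache.
command.Parameters.AddWithValue("@UserID", userID);

// Fixed declaration: always varchar(20), so every call matches the same cached plan.
command.Parameters.Add("@UserID", SqlDbType.VarChar, 20).Value = userID;
```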

Britain answered 21/12, 2011 at 12:56 Comment(6)
That does not compute; this would only apply if the parameters were huge and bandwidth issues are the bottleneck; I very much doubt this is the case here. The time to send a few basic parameters is trivial, and vastly outweighed by latency (rather than bandwidth) in most cases.Lilalilac
@MarcGravell: Are you addressing my first paragraph or my second?Britain
First line - I would not expect that to account for a 50ms jump. Maybe a 0.1ms deviation (assuming no huge values).Lilalilac
I've tried a separate parameter collection for each command - no significant change for me.Myosin
Yes, you have a point. (especially since his names imply that they aren't strings)Britain
Nice trick (for setting explicit length) - thank you! But it just can't resolve my 50ms gap.Myosin