How can I solve a connection pool problem between ASP.NET and SQL Server?

For the last few days we have been seeing this error message on our website far too often:

"Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached."

We have not changed anything in our code in a while. I reviewed the code for open connections that never get closed, but everything looked fine.

  • How can I solve this?

  • Do I need to edit this pool?

  • How can I edit this pool's max number of connections?

  • What is the recommended value for a high traffic website?


Update:

Do I need to edit something in IIS?

Update:

I found that the number of active connections ranges from 15 to 31, and that the maximum number of connections allowed in SQL Server is configured to more than 3200. Is 31 too many, or should I change something in the ASP.NET configuration?

Scavenger answered 22/3, 2009 at 9:50 Comment(6)
The Max Pool Size default is 100 if I remember correctly. Most web sites do not use more than 50 connections under heavy load - it depends on how long your queries take to complete. Short-term fix in the connection string: try setting a higher value in your connection strings: "Max Pool Size=..."Spectacled
how much? make it 200 for example?Scavenger
I think that you should really search for what's causing the problem. Are your queries (or some of them) running very long?Spectacled
Maybe that's the real reason: a query that takes too much time to execute. I will look into that, thanksScavenger
I hope you can find the problem. A suggestion: if you're using SQL Server, try the "SQL Profiler" and look for long queries: sql-server-performance.com/articles/per/…Spectacled
Could be your statistics that need to be updated.Chaconne

In most cases connection pooling problems are related to connection leaks. Your application probably doesn't close its database connections correctly and consistently. When you leave connections open, they remain blocked until the .NET garbage collector closes them for you by calling their Finalize() method.

You want to make sure that you are really closing the connection. For example, the following code will cause a connection leak if the code between .Open() and .Close() throws an exception:

var connection = new SqlConnection(connectionString);

connection.Open();
// some code
connection.Close();                

The correct way would be this:

var connection = new SqlConnection(ConnectionString);

try
{
     connection.Open();
     someCall (connection);
}
finally
{
     connection.Close();                
}

or

using (SqlConnection connection = new SqlConnection(connectionString))
{
     connection.Open();
     someCall(connection);
}

When a class method returns a connection, make sure you cache it locally and call its Close method. For example, the following code will leak a connection:

var command = new OleDbCommand(someUpdateQuery, getConnection());

result = command.ExecuteNonQuery();
getConnection().Close();

The connection returned from the first call to getConnection() is not being closed. Instead of closing your connection, this line creates a new one and tries to close it.

If you use a SqlDataReader or an OleDbDataReader, close them. Even though closing the connection itself seems to do the trick, put in the extra effort to close your data reader objects explicitly when you use them.
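
For example, a minimal sketch (table and column names are made up) where nested using blocks close the reader, the command and the connection deterministically, even if an exception is thrown while reading:

using (var connection = new SqlConnection(connectionString))
using (var command = new SqlCommand("SELECT Id, Name FROM SomeTable", connection))
{
    connection.Open();
    using (var reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            // process the current row
        }
    } // the reader is closed here
} // the command and the connection are disposed here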


This article "Why Does a Connection Pool Overflow?" from MSDN/SQL Magazine explains a lot of details and suggests some debugging strategies:

  • Run sp_who or sp_who2. These system stored procedures return information from the sysprocesses system table that shows the status of and information about all working processes. Generally, you'll see one server process ID (SPID) per connection. If you named your connection by using the Application Name argument in the connection string, your working connections will be easy to find.
  • Use SQL Server Profiler with the SQLProfiler TSQL_Replay template to trace open connections. If you're familiar with Profiler, this method is easier than polling by using sp_who.
  • Use the Performance Monitor to monitor the pools and connections. I discuss this method in a moment.
  • Monitor performance counters in code. You can monitor the health of your connection pool and the number of established connections by using routines to extract the counters or by using the new .NET PerformanceCounter controls.
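
Regarding the last point, here is a minimal sketch of reading the ADO.NET pooling counters from code. It assumes Windows and the full .NET Framework, where the ".NET Data Provider for SqlServer" performance counter category is registered; locating the instance by process id is an assumption that may need adjusting for your environment:

// A rough sketch, not a definitive implementation.
using System;
using System.Diagnostics;
using System.Linq;

class PoolCounterSample
{
    static void Main()
    {
        var category = new PerformanceCounterCategory(".NET Data Provider for SqlServer");
        var pid = Process.GetCurrentProcess().Id.ToString();

        // ADO.NET counter instances usually embed the process id in their name.
        var instance = category.GetInstanceNames().FirstOrDefault(n => n.Contains(pid));
        if (instance == null) return;

        using (var pooled = new PerformanceCounter(
            ".NET Data Provider for SqlServer", "NumberOfPooledConnections", instance, readOnly: true))
        {
            Console.WriteLine("Pooled connections: " + pooled.NextValue());
        }
    }
}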
Spectacled answered 22/3, 2009 at 10:55 Comment(5)
A small correction: the GC never calls an object's Dispose method, only its finalizer (if there is one). The finalizer can then do a "fallback" call to Dispose, if necessary, although I'm not certain whether SqlConnection does so.Philipson
Is there any kind of performance degradation when we have to set Max Pool size 50 and we have only few users.Vena
Just in case someone needs it: in my case it was because of n+1 problem and too many threads running at the same time which led to a mass amount of requests to a MS SQL DB at the same time.Myelitis
@ConstantineKetskalo How did you solve this problem where too many threads running at the same time were trying to open the connection?Hindquarter
@RRN, optimized the code: made it send fewer requests by pulling an array of data instead of each record separately. I also limited the number of threads with a semaphore, since too many threads leads to the server spending more resources on managing threads rather than on the real job. The number of threads has nothing to do with the DB error though; it's rather about the efficiency of the code.Myelitis

Upon installing .NET Framework v4.6.1 our connections to a remote database immediately started timing out due to this change.

To fix simply add the parameter TransparentNetworkIPResolution in the connection string and set it to false:

Server=myServerName;Database=myDataBase;Trusted_Connection=True;TransparentNetworkIPResolution=False

Saguenay answered 31/5, 2016 at 22:53 Comment(5)
In this case the issue is "The timeout period elapsed prior to obtaining a connection from the pool". Did your connection string fix resolve this, or was it a separate handshaking issue?Isla
From what I remember it was the exact same error message as in the question. It occurred immediately after updating to .NET Framework v4.6.1.Saguenay
This was the case for me too, fixed it for me on an App Service I was running on Azure, connection to an Azure SQL database. I was using Dapper and correctly disposing of connections, yet still got the "timeout period elapsed prior to obtaining a connection from the pool" error message. But no more, so thanks @SaguenayCrowell
Would switching the connection string from the NETBIOS name to the IP Address also solve this issue?Trutko
This Microsoft article explains why this setting might help techcommunity.microsoft.com/t5/sql-server-support/…Shiller

Yet another cause happened in my case: the use of async/await, resulting in the same error message:

System.InvalidOperationException: 'Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached.'

Just a quick overview of what happened (and how I resolved it), hopefully this will help others in the future:

Finding the cause

This all happened in an ASP.NET Core 3.1 web project with Dapper and SQL Server, but I think it is largely independent of that particular kind of project.

First, I have a central function to get me SQL connections:

internal async Task<DbConnection> GetConnection()
{
    var r = new SqlConnection(GetConnectionString());
    await r.OpenAsync().ConfigureAwait(false);
    return r;
}

I'm using this function in dozens of methods like e.g. this one:

public async Task<List<EmployeeDbModel>> GetAll()
{
    await using var conn = await GetConnection();
    var sql = @"SELECT * FROM Employee";

    var result = await conn.QueryAsync<EmployeeDbModel>(sql);
    return result.ToList();
}

As you can see, I'm using the new using statement without the curly braces ({, }), so disposal of the connection is done at the end of the function.

Still, I got the error about no more connections in the pool being available.

I started debugging my application and let it break when the exception occurred. When it halted, I first looked at the Call Stack window, but it only showed some location inside System.Data.SqlClient and was no real help to me:

[Screenshot: the Call Stack window]

Next, I took a look at the Tasks window, which was much more helpful:

[Screenshot: the Tasks window]

There were literally thousands of calls to my own GetConnection method in an "Awaiting" or "Scheduled" state.

When double-clicking such a line in the Tasks window, it showed me the related location in my code via the Call Stack window.

This helped me find the real reason for this behaviour. It was in the code below (included just for completeness):

[Route(nameof(LoadEmployees))]
public async Task<IActionResult> LoadEmployees(
    DataSourceLoadOptions loadOption)
{
    var data = await CentralDbRepository.EmployeeRepository.GetAll();

    var list =
        data.Select(async d =>
            {
                var values = await CentralDbRepository.EmployeeRepository.GetAllValuesForEmployee(d);
                return await d.ConvertToListItemViewModel(
                    values,
                    Config,
                    CentralDbRepository);
            })
            .ToListAsync();
    return Json(DataSourceLoader.Load(await list, loadOption));
}

In the above controller action, I first called EmployeeRepository.GetAll() to get a list of models from the database table "Employee".

Then, for each of the returned models (i.e. for each row of the result set), I made another database call to EmployeeRepository.GetAllValuesForEmployee(d).

While this is very bad in terms of performance anyway, in an async context it ends up eating pooled connections without releasing them in time.

Solution

I resolved it by removing the SQL query from the inner loop of the outer SQL query.

This can be done by either omitting it completely or, if required, moving it into one or more JOINs in the outer SQL query, so that all data is fetched from the database in a single query. A sketch of the batched alternative follows below.
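
Here is a rough sketch of what the batched variant could look like with Dapper, which expands a list parameter into an IN clause. The EmployeeValue table, the EmployeeId column, the EmployeeValueDbModel type and the Values property are hypothetical names used only for illustration:

public async Task<List<EmployeeDbModel>> GetAllWithValues()
{
    await using var conn = await GetConnection();

    // One round trip for the employees...
    var employees = (await conn.QueryAsync<EmployeeDbModel>("SELECT * FROM Employee")).ToList();

    // ...and one round trip for all of their values (hypothetical EmployeeValue
    // table and EmployeeId column), instead of one query per employee.
    var ids = employees.Select(e => e.Id).ToList();
    var values = await conn.QueryAsync<EmployeeValueDbModel>(
        "SELECT * FROM EmployeeValue WHERE EmployeeId IN @EmployeeIds",
        new { EmployeeIds = ids });

    // Group the values in memory and attach them to their employees
    // (assumes a Values collection property on the model).
    var valuesByEmployee = values.ToLookup(v => v.EmployeeId);
    foreach (var employee in employees)
        employee.Values = valuesByEmployee[employee.Id].ToList();

    return employees;
}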

tl;dr / lessons learned

Don't do lots of SQL queries in a short amount of time, especially when using async/await.

Nodab answered 28/8, 2020 at 6:25 Comment(0)

Unless your usage went up a lot, it seems unlikely that there is just a backlog of work. IMO, the most likely option is that something is using connections and not releasing them promptly. Are you sure you are using using in all cases? Or (through whatever mechanism) releasing the connections?

Bucentaur answered 22/3, 2009 at 9:58 Comment(0)

Did you check for DataReaders that are not closed, and for Response.Redirect calls made before closing the connection or a data reader? Connections stay open when you don't close them before a redirect.
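
For example, a minimal sketch of the pattern in a Web Forms code-behind (query, table and page names are placeholders). A plain Response.Redirect ends the request by throwing a ThreadAbortException, so code after it never runs, but using blocks (and finally blocks) still dispose the reader and the connection:

using (var connection = new SqlConnection(connectionString))
using (var command = new SqlCommand("SELECT TOP 1 Id FROM SomeTable", connection))
{
    connection.Open();
    using (var reader = command.ExecuteReader())
    {
        // read what you need before redirecting
    }
    // Response.Redirect ends the request (by throwing a ThreadAbortException),
    // but the using blocks above still dispose the reader and the connection.
    Response.Redirect("~/SomePage.aspx");
}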

Ciccia answered 22/3, 2009 at 10:5 Comment(3)
+1 - Or functions returning DataReaders - Connection will never close outside the function you created them...Spectacled
If your function returns SqlDataReader you are better off converting it to DataTable than raising the max pool sizeSuperego
However, if I use CommandBehavior.CloseConnection in the ExecuteReader() call, it will close the connection once the reader's Close method is called.Unscreened

We encounter this problem from time to time on our web site as well. The culprit in our case is our stats/indexes getting out of date. This causes a previously fast-running query to (eventually) become slow and time out.

Try updating statistics and/or rebuilding the indexes on the tables affected by the query and see if that helps.

Chaconne answered 14/6, 2009 at 22:51 Comment(3)
I think that this would explain why a query could time out, but I don't think that this would explain why a time out is experienced whilst trying to obtain a connection.Cigarillo
It could be that with a bad index the queries take longer and more connections are used at the same time.Jailbreak
Worked for me after updating all statistics with sp_updatestats on a db: EXEC sp_updatestats;Inarticulate

You can also try the following to solve the timeout problem:

If you didn't add httpRuntime to your web.config, add it inside the <system.web> tag:

<system.web>
    <httpRuntime maxRequestLength="20000" executionTimeout="999999"/>
</system.web>

and modify your connection string like this:

<add name="connstring" 
     connectionString="Data Source=DSourceName;Initial Catalog=DBName;Integrated Security=True;Max Pool Size=50000;Pooling=True;"
     providerName="System.Data.SqlClient" />

And finally, use:

try
{
    // ...
} 
catch
{
    // ...
} 
finally
{
    connection.Close();
}
Supersonic answered 2/1, 2018 at 14:37 Comment(0)

While this is quite late to the party, I recently had this error, and nothing I found while researching the problem gave me any insight other than connection leaks -- which was not my problem.

In case this helps anyone else, I'll explain the problem, the solution, and some helpful things I found along the way, since I couldn't find them anywhere else.

In my case, the underlying issue was that I had not annotated the model properties for key columns that were strings. Take, for example:

public class InventoryItem
{
    public string StateCode {get;set;}
    public string CompanyId {get;set;}
    public int Id {get;set;}
    // more properties removed for simplicity
}

In my DbContext, I configured these columns as a composite key:

protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.Entity<InventoryItem>().HasKey(x => new { x.StateCode, x.CompanyId, x.Id });    
}

SQL Table definition

CREATE TABLE [Example].[InventoryItem] (
    [Id] INT NOT NULL,
    [StateCode] VARCHAR (2) NOT NULL,
    [CompanyId] VARCHAR (50) NOT NULL,
    CONSTRAINT [PK_InventoryItems] PRIMARY KEY CLUSTERED ([StateCode] ASC, [CompanyId] ASC, [Id] ASC)
)

C# querying code

public async Task<List<InventoryItem>> GetAllCompanyInventory(string stateCode, string companyId)
{
    using (var context = ContextFactory.CreateContext())
    {
        return await context.InventoryItems.Where(x => x.StateCode == stateCode && x.CompanyId == companyId).ToListAsync().ConfigureAwait(false);
    }
}

In this case, what was happening is that stateCode and companyId were being parameterized as NVARCHAR(450), and the query was using CONVERT_IMPLICIT, which effectively prevented SQL Server from using the indexes correctly. My CPU and worker time went to 100%, and somehow that translated into connection pool issues. After adding annotations to these properties, my CPU never went above 5% and I never saw a connection pool issue again. Here are some of the things that helped me identify this issue.

The fix was to annotate these properties

public class InventoryItem
{
    [System.ComponentModel.DataAnnotations.Schema.Column(TypeName = "varchar(2)")]
    public string StateCode {get;set;}
    [System.ComponentModel.DataAnnotations.Schema.Column(TypeName = "varchar(50)")]
    public string CompanyId {get;set;}
    public int Id {get;set;}
    // more properties removed for simplicity
}
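
If you prefer keeping the mapping in OnModelCreating instead of attributes, the Fluent API equivalent should look roughly like this (a sketch assuming EF Core; EF6 offers a similar but slightly different configuration API):

protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.Entity<InventoryItem>().HasKey(x => new { x.StateCode, x.CompanyId, x.Id });

    // Map the string key columns as varchar so SQL Server does not have to
    // CONVERT_IMPLICIT the nvarchar parameters on every lookup.
    modelBuilder.Entity<InventoryItem>().Property(x => x.StateCode).HasColumnType("varchar(2)");
    modelBuilder.Entity<InventoryItem>().Property(x => x.CompanyId).HasColumnType("varchar(50)");
}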

After I made this change, the server was pretty much sleeping. Running sp_who showed a much smaller number of connections and almost all of them were sleeping.

Somewhere along the way, someone mentioned using sp_who to see the active connections. I could see connections go from 0 to the maximum allowed as soon as I started running my code locally, all listing my hostname and showing a running status. So I thought my code had an error, but literally every single place we create a context was properly wrapped in a using statement. I was losing my mind. I then shifted my thinking: maybe this is a symptom and not the actual problem. That led me down the path to finding the problem.

I found this article, which eventually led to this SQL statement, which REALLY helped me identify what was happening.

SELECT TOP 20
    (total_logical_reads/execution_count) AS avg_logical_reads,
    (total_logical_writes/execution_count) AS avg_logical_writes,
    (total_physical_reads/execution_count) AS avg_phys_reads,
    (total_worker_time/execution_count) AS avg_cpu_over_head,
total_logical_reads, total_logical_writes, total_physical_reads,
total_worker_time, execution_count, total_elapsed_time AS Duration,
plan_generation_num AS num_recompiles,
statement_start_offset AS stmt_start_offset,
    (SELECT SUBSTRING(text, statement_start_offset/2 + 1,
        (CASE WHEN statement_end_offset = -1
            THEN LEN(CONVERT(nvarchar(MAX),text)) * 2
                ELSE statement_end_offset
            END - statement_start_offset)/2)
     FROM sys.dm_exec_sql_text(sql_handle)) AS query_text,
      (SELECT query_plan FROM sys.dm_exec_query_plan(plan_handle)) AS query_plan
FROM sys.dm_exec_query_stats a
--JUST CHANGE THE ORDER BY TO GET THE OTHER RESOURCES
ORDER BY (total_logical_reads + total_logical_writes)/execution_count DESC

From here, I could get the query text and query plans that were having problems. I noticed that some of the output had a total worker time of > 4 billion, compared to other heavy queries which were around 300 million. These super-high worker times were not expected, because those queries had exactly the index key in the search predicate. Looking at the actual query plan, I was able to see a few things:

  1. An extremely high estimated number of rows to be read (e.g. > 10 million when the estimated rows per execution was only 1)
  2. Use of CONVERT_IMPLICIT
  3. A recommendation to create a different index that was effectively the one I already had in place (somewhat complicated to explain with the simplified example, but the actual index has more columns and it was omitting the first two columns to avoid the conversion)

This article helped me understand why my estimated number of rows was so high even when there was an index using the exact keys I was searching for.

This article helped me understand the CONVERT_IMPLICIT calls in the query plans I was reviewing.

I found the above article because it was mentioned in this post.

This helped me figure out how to change the default from nvarchar to varchar.

I'm still not positive why this was causing the connection pool timeout issue. I had some tests running that would execute one state code + company id at a time and it was still hitting this error pretty much right away. Once I changed it to have the annotations, everything cleared up even when processing many requests simultaneously.

Serranid answered 20/2, 2023 at 4:59 Comment(0)

I just had the same problem and wanted to share what helped me find the source: add the Application Name to your connection string and then monitor the open connections to the SQL Server:

select st.text,
    es.*, 
    ec.*
from sys.dm_exec_sessions as es
    inner join sys.dm_exec_connections as ec on es.session_id = ec.session_id
    cross apply sys.dm_exec_sql_text(ec.most_recent_sql_handle) st
where es.program_name = '<your app name here>'
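
For reference, the program_name filtered on above is set through the Application Name keyword of the connection string, for example (the application name itself is just a placeholder):

Server=myServerName;Database=myDataBase;Integrated Security=True;Application Name=MyWebApp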
Angus answered 18/2, 2019 at 11:37 Comment(0)

You can specify the minimum and maximum pool sizes by adding Min Pool Size=xyz and/or Max Pool Size=xyz to the connection string. The cause of the problem could be something else entirely, however.

Ellingson answered 22/3, 2009 at 9:53 Comment(4)
Whats the recommended MaxPoolSize?Scavenger
Probably, the best way to go is not to specify it unless you have special requirements and you know a good pool size for your specific case.Ellingson
Note: I checked and found that there are around 22 live connections to my db. Is that too many?Scavenger
I don't think so. The default pool size is 100 connections I think. It depends on the load of each connection on the network and SQL server. If those are running heavy queries, it might cause problems. Also, network issues might occur when initiating a new connection and may cause that exception.Ellingson

I have encountered this problem too, when using some 3rd party data layer in one of my .NET applications. The problem was that the layer did not close the connections properly.

We threw out the layer and created one ourselves, which always closes and disposes the connections. Since then we don't get the error anymore.

Navigate answered 22/3, 2009 at 10:50 Comment(1)
We are using LLBL, and the website has been running for 2 years; it only started acting like this in the last few days.Scavenger

In my case, I was not closing the DataReader object.

using (SqlCommand dbCmd = new SqlCommand("*StoredProcedureName*"))
using (dbCmd.Connection = new SqlConnection(WebConfigurationAccess.ConnectionString))
{
    dbCmd.CommandType = CommandType.StoredProcedure;

    //Add parameters
    dbCmd.Parameters.Add(new SqlParameter("@ID", SqlDbType.Int)).Value = ID;

    .....
    .....

    dbCmd.Connection.Open();
    var dr = dbCmd.ExecuteReader(); //created a Data reader here
    dr.Close();    //gotta close the data reader
    //dbCmd.Connection.Close(); //don't need this as 'using' statement should take care of this in its implicit dispose method.
}
Lesotho answered 24/8, 2015 at 15:31 Comment(0)

In addition to the posted solutions:

When dealing with 1000 pages of legacy code, each calling a common GetRS multiple times, here is another way to fix the issue:

In an existing common DLL, we added the CommandBehavior.CloseConnection option:

static public IDataReader GetRS(String Sql)
{
    SqlConnection dbconn = new SqlConnection(DB.GetDBConn());
    dbconn.Open();
    SqlCommand cmd = new SqlCommand(Sql, dbconn);
    return cmd.ExecuteReader(CommandBehavior.CloseConnection);   
}

Then in each page, as long as you close the data reader, the connection is also automatically closed so connection leaks are prevented.

IDataReader rs = CommonDLL.GetRS("select * from table");
while (rs.Read())
{
    // do something
}
rs.Close();   // this also closes the connection
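
Where a calling page can be modified, wrapping the reader in a using block guarantees the close, and with it the connection close via CommandBehavior.CloseConnection, even if an exception is thrown while reading:

using (IDataReader rs = CommonDLL.GetRS("select * from table"))
{
    while (rs.Read())
    {
        // do something
    }
} // Dispose closes the reader, which also closes the underlying connection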
Musteline answered 31/5, 2018 at 5:32 Comment(0)

This is mainly due to connections not being closed in the application. Use "Min Pool Size" and "Max Pool Size" in the connection string.

Dandruff answered 31/7, 2013 at 5:57 Comment(0)

If you are working on complex legacy code where a simple using(..) {..} isn't possible - as was my case - you may want to check out the code snippet I posted in this SO question for a way to determine the call stack of the connection creation when a connection is potentially leaked (not closed after a set timeout). This makes it fairly easy to spot the cause of the leaks.

Incongruous answered 21/2, 2013 at 13:55 Comment(0)

Don't instantiate SQL connections too many times. Open one or two connections and reuse them for all subsequent SQL operations.

It seems that the exception is thrown even when the connections are disposed.

Staurolite answered 13/6, 2016 at 10:39 Comment(0)

Use this:

finally
{
    connection.Close();
    connection.Dispose();
    SqlConnection.ClearPool();
}
Lyckman answered 20/7, 2014 at 12:50 Comment(1)
Maybe I am missing the point of SqlConnection.ClearPool, but does that just prevent your current connection being released back to the connection pool? I thought the idea of the connection pool was to allow quicker connections. Surely releasing the connection from the pool each time it is finished with means a NEW connection will need to be created EVERY TIME one is needed, instead of pulling a spare one from the pool? Please explain how and why this technique is useful.Phalanstery

I had this problem in my code. Below is some example code showing how I overcame the error: "Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached."

 String query = "insert into STATION2(ID,CITY,STATE,LAT_N,LONG_W) values('" + a1 + "','" + b1 + "','" + c1 + "','" + d1 + "','" + f1 + "')";
    //,'" + d1 + "','" + f1 + "','" + g1 + "'

    SqlConnection con = new SqlConnection(mycon);
    con.Open();
    SqlCommand cmd = new SqlCommand();
    cmd.CommandText = query;
    cmd.Connection = con;
    cmd.ExecuteNonQuery();
    con.Close(); // this Close call was missing

You need to close the connection every single time. Before, I didn't close the connection, and that caused the error. After adding the Close call, the error went away.
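
For completeness, a sketch of the same insert with a using block, so the connection is closed even if ExecuteNonQuery throws, and with parameters instead of string concatenation (the variable names are taken from the snippet above):

string query = "insert into STATION2(ID,CITY,STATE,LAT_N,LONG_W) values(@id,@city,@state,@lat,@long)";

using (SqlConnection con = new SqlConnection(mycon))
using (SqlCommand cmd = new SqlCommand(query, con))
{
    cmd.Parameters.AddWithValue("@id", a1);
    cmd.Parameters.AddWithValue("@city", b1);
    cmd.Parameters.AddWithValue("@state", c1);
    cmd.Parameters.AddWithValue("@lat", d1);
    cmd.Parameters.AddWithValue("@long", f1);

    con.Open();
    cmd.ExecuteNonQuery();
} // the connection is closed and disposed here, even on exceptions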

Aubry answered 18/11, 2017 at 11:41 Comment(0)

I was facing the same problem. After hours of research, I realized I was connected to a guest network without a VPN, so setting up the VPN did the trick for me.

Perineurium answered 15/1, 2021 at 9:12 Comment(0)

I also got this exact error log on my AWS EC2 instance.

There were no connection leaks, since I was just deploying the alpha application (no real users), and I confirmed with Activity Monitor and sp_who that there were in fact no connections to the database.

My issue was AWS related - more specifically, with the Security Groups. Only certain security groups had access to the RDS server where I hosted the database. I added an ingress rule with the authorize-security-group-ingress command, using the --source-group-name parameter, to allow the correct EC2 instance access to the RDS server. The ingress rule was added - I could see it in the AWS UI - but I still got this error.

When I removed the ingress rule and then re-added it manually in the AWS UI, the exception suddenly disappeared and the app was working.

Clubhaul answered 22/1, 2021 at 17:41 Comment(0)

For Entity Framework, this could be a case of connection overload. You just need to fix the connection string by adding this field:

=> Max Pool Size=200

Example:

<add name="DataProvider" connectionString="Data Source=.;Initial Catalog=dbname;User ID=dbuser;Password=dbpassword;Max Pool Size=200" providerName="System.Data.SqlClient" />
Hammad answered 10/10, 2022 at 16:21 Comment(1)
You are my savior. Thanks!Stria

I have encountered this problem before. It ended up being an issue with the firewall: I had to add a rule to open port 1433 so the application could connect to SQL Server.

Extinguish answered 5/10, 2017 at 19:0 Comment(0)

In my case, I had an infinite loop (from a property getter trying to read a value from the database) which kept opening hundreds of SQL connections.

To reproduce the problem, try this:

while (true)
{
    using (SqlConnection connection = new SqlConnection(connectionString))
    {
        connection.Open();
        someCall(connection);
    }
}
Sipe answered 14/12, 2018 at 15:23 Comment(0)

Yes, there is a way to change the configuration. If you are on a dedicated server and simply need more SQL connections, you can update the "max pool size" entries in both connection strings by following these instructions:

  1. Log into your server using Remote Desktop.
  2. Open My Computer (Windows + E) and go to C:\inetpub\vhosts\[domain]\httpdocs.
  3. Double click on the web.config file. This may just be listed as web if the file structure is set to hide extensions. This will open up Visual Basic or a similar editor.
  4. Find your connection strings; these will look similar to the example below:

    <add name="SiteSqlServer" connectionString="server=(local);database=dbname;uid=dbuser;pwd=dbpassword;pooling=true;connection lifetime=120;max pool size=25;" />

  5. Change the max pool size=X value to the required pool size.
  6. Save and close your web.config file.
Shanklin answered 15/11, 2019 at 6:37 Comment(0)

Make sure you set up the correct settings for the connection pool. This is very important, as I have explained in the following article: https://medium.com/@dewanwaqas/configurations-that-significantly-improves-your-app-performance-built-using-sql-server-and-net-ed044e53b60 You will see a drastic improvement in your application's performance if you follow it.

Concelebrate answered 18/3, 2020 at 18:56 Comment(0)

I wasn't thinking this was my issue at first, but running through this list I discovered that it didn't cover what my issue was.

My issue was that I had a bug in which the code tried to write the same record numerous times using Entity Framework. It shouldn't have been doing this; it was my bug. Take a look at the data you are writing. My thought is that SQL Server was busy writing a record, possibly locking and creating the timeout. After I fixed the area of code that was attempting to write the record multiple times in sequential attempts, the error went away.

Eclectic answered 29/1, 2021 at 16:33 Comment(0)

You can make sure the connection is closed when the page is unloaded:

protected void Page_Unload(object sender, EventArgs e) {
    if (qryCnn != null) 
        qryCnn.Close();
}
Whaleback answered 14/11, 2022 at 7:41 Comment(2)
It seems you need more context in this answer to give a full solution. Please add more context and a more complete solution. Based on the question this seems to not match.Faxon
As it’s currently written, your answer is unclear. Please edit to add additional details that will help others understand how this addresses the question asked. You can find more information on how to write good answers in the help center.Socialite

For me, using Entity Framework, I misunderstood the IntelliSense suggestion that I could simplify the using statement. I incorrectly simplified it by dropping the using statement altogether.

i.e.

using (var db = new MyDbContext()){}

simplifies to

using var db = new MyDbContext();

not

var db = new MyDbContext();
Insistence answered 1/12, 2022 at 22:26 Comment(0)

I was opening multiple connections to a SQL Server that is far away (8000 km) and was getting the same error. None of the previous answers solved it. I finally succeeded in solving it by adding the "Connection Timeout" parameter to the connection string.

e.g. "Data Source=ServerXX;Initial Catalog=XX;User ID=XX;Password=XX;Connection Timeout=300"

The default connection timeout is 15 seconds, and some of my connections were not able to get a connection in less than 15 seconds, thus raising this error. Increasing it to 300 solved it.

Hotheaded answered 9/11, 2023 at 17:57 Comment(0)

You have leaked connections in your code. You can use using to make sure they are closed.

using (SqlConnection sqlconnection1 = new SqlConnection("Server=.\\SQLEXPRESS;Integrated security=sspi;connection timeout=5")) 
{
    sqlconnection1.Open();
    SqlCommand sqlcommand1 = sqlconnection1.CreateCommand();
    sqlcommand1.CommandText = "raiserror ('This is a fake exception', 17,1)";
    sqlcommand1.ExecuteNonQuery();  // this throws a SqlException every time it is called.
    sqlconnection1.Close(); // Still never gets called.
} // Here sqlconnection1.Dispose is _guaranteed_

https://blogs.msdn.microsoft.com/angelsb/2004/08/25/connection-pooling-and-the-timeout-expired-exception-faq/

Carabiniere answered 15/8, 2016 at 13:14 Comment(0)
