Summary: I believe the best practice is to instantiate your web service client when you are about to use it, then let it go out of scope and get garbage collected. This is reflected in the samples you see coming from Microsoft. Justification follows...
Full: The best full description of the process that I have found is at How to: Access a Service from Silverlight. That example shows the typical pattern of instantiating the web service client and allowing it to go out of scope (without needing to close it). Web service clients inherit from ClientBase, which has a Finalize method that frees any unmanaged resources, if necessary, when the object is garbage collected.
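To illustrate, the pattern that article describes boils down to something like the following sketch (using a DoWork operation like the one in my test further down; note there is no Close or Dispose call anywhere):
public void CallService()
{
    // Create the proxy right before use...
    ServiceReference1.Service1Client client = new ServiceReference1.Service1Client();
    client.DoWorkCompleted += (sender, e) =>
    {
        // ...handle the result here; the proxy then simply goes out of scope
        // and is reclaimed by the garbage collector.
    };
    client.DoWorkAsync();
}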
I have a decent amount of experience using web services, and my habit is to instantiate a proxy right before use and then let it be garbage collected. I have never had a problem with this approach. Wenlong Dong's blog says that creating the proxy is expensive, but even he notes that performance improved in .NET 3.5 (and it may have improved again since then). What I can tell you is that performance is a relative term: unless the data being retrieved is trivially small, far more time will be spent serializing/deserializing and in transport than in creating the connection. That has certainly been my experience, and you are better off optimizing in those areas first.
Last, since I figure my opinions thus far may be insufficient, I wrote a quick test. I created a Silverlight-enabled web service using the template provided with Visual Web Developer 2010 Express (with a default void method called DoWork()). Then, in my sample Silverlight client, I called it using the following code:
int counter = 0;

public void Test()
{
    // Create a fresh proxy for every call and never explicitly close it.
    ServiceReference1.Service1Client client = new ServiceReference1.Service1Client();
    client.DoWorkCompleted += (obj, args) =>
    {
        counter++;
        if (counter > 9999)
        {
            // After the 10,000th completion, force collection and report.
            for (int j = 0; j < 10; j++) GC.Collect();
            System.Windows.MessageBox.Show("Completed");
        }
    };
    client.DoWorkAsync();
}
I then called the Test method 10,000 times using for (int i = 0; i < 10000; i++) Test(); and fired up the application. It took a little over 20 seconds to load the app and complete all 10,000 web service calls. While the calls were being made I saw the memory usage for the process climb to over 150 MB, but once the calls completed and GC.Collect() was called, memory usage dropped to less than half that amount. Far from a perfect test, it nevertheless suggests to me that no memory was leaking, or that any leak was negligible (and it is probably uncommon to make 10,000 web service calls each with its own client instance anyway). It is also a much simpler model than keeping a proxy object around and having to worry about it faulting and needing to be reopened.
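As an aside, if you wanted a number more precise than watching the process in Task Manager, a rough sketch of the same check using GC.GetTotalMemory might look like this (the baseline field and the RunMemoryCheck method are my own illustrative additions, not part of the original test):
long baseline;

public void RunMemoryCheck()
{
    // Snapshot the managed heap before kicking off the calls.
    // GC.GetTotalMemory(true) forces a collection first, so the explicit
    // GC.Collect() loop in Test() becomes redundant for this measurement.
    baseline = GC.GetTotalMemory(true);
    for (int i = 0; i < 10000; i++) Test();
}

// Then, inside the DoWorkCompleted handler once counter passes 9999:
//     long after = GC.GetTotalMemory(true);
//     System.Windows.MessageBox.Show(
//         string.Format("Managed heap grew by {0:N0} bytes", after - baseline));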
Justification of Test Methodology:
My test focused on two potential problems: a memory leak, and processor time spent creating and destroying the objects. My recommendation is that it is safe to follow the examples provided by the company (Microsoft) that supplies the classes. If you are concerned about network efficiency, you should have no problem with my example, since properly creating/disposing these objects does not affect network latency. If 99% of the time is spent on the network, then optimizing for a theoretical improvement in the remaining 1% is probably a waste of development time (assuming there is even a benefit to be gained, which I believe my test shows there is little or none). Yes, the networking calls were local, which is to say that over the course of 10,000 service calls only about 20 seconds were spent on everything other than network latency; that works out to roughly 2 milliseconds per call spent creating the objects.

Regarding the need to call Dispose: I didn't mean to imply that you shouldn't call it, merely that it doesn't appear necessary. If you forget (or simply choose not to), my tests lead me to believe that the equivalent of Dispose is handled in the Finalize for these objects. Even so, it would probably be slightly more efficient to call Dispose yourself, but the effect is negligible. For most software development you gain far more from better algorithms and data structures than from agonizing over issues like these (unless there is a serious memory leak). If you require more efficiency, then perhaps you shouldn't be using web services at all, since there are more efficient data transit options than a system based on XML.
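For completeness, if you do want to clean up explicitly, the commonly recommended pattern for full WCF is to Close the proxy and fall back to Abort if the channel has faulted, something like the sketch below. Treat it as the general WCF guidance rather than anything from my test; I believe the Silverlight-generated proxy exposes asynchronous equivalents (e.g. CloseAsync), so check the generated code before relying on the exact members:
using System;
using System.ServiceModel;

public void CallAndCleanUp()
{
    ServiceReference1.Service1Client client = new ServiceReference1.Service1Client();
    try
    {
        // ... make your service calls and wait for them to complete ...
        client.Close();   // graceful shutdown once you are done with the proxy
    }
    catch (CommunicationException)
    {
        client.Abort();   // channel faulted; Abort releases the resources
    }
    catch (TimeoutException)
    {
        client.Abort();
    }
}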