Silverlight 4.0 and WCF client proxy - how to create and how to close instances
The topic of the Silverlight WCF service proxy lifecycle is not very clear to me. I have read various materials, resources, and answers here, but I still don't completely understand the supposed best way to use them.

I am currently using a custom binary binding in Silverlight 4.0.

Is creating a proxy in Silverlight an expensive operation? Should we try to share a proxy instance in code, or is it better to create a new one each time? If we do share one, should we lock around it in case multiple threads access it?

Since an error on a proxy will fault its state, I think sharing a proxy isn't a good idea, but I've read that creation is expensive, so it's not 100% clear what to do here.

And then there is closing: Silverlight WCF service clients only provide a CloseAsync method. Proxies also require certain logic when they are closed (if they are faulted, we should call Abort(), which is synchronous in Silverlight; if not, we should call CloseAsync, which is asynchronous — or what, exactly?).

In many official Silverlight samples from Microsoft, proxies are not closed at all. Is that just a flaw in the materials, or is it the expected way to handle them?

This topic is very important to me, and I want a clear understanding of everything that should be considered, which I currently don't have.

(I did see that the question What is the proper life-cycle of a WCF service client proxy in Silverlight 3? appears close to mine, but I cannot say I am satisfied with the quality of its answers.)

I would really like to see sample code that creates, uses, and closes WCF proxies, and, most importantly, that explains why that is the best possible approach. I also believe that, because of the nature of the problem, there should be a single, general-purpose best practice or pattern for creating, reusing, and closing WCF proxies in Silverlight.

Skullcap answered 19/8, 2011 at 8:1 Comment(3)
+1 Excellent question, really wish I had an answer for you.Eamon
Wenlong Dong has a good article (.NET 3.5) regarding best practices for WCF client proxy creation: blogs.msdn.com/b/wenlong/archive/2007/10/27/… . The area of WCF ChannelFactory caching is still a bit of a mystery to me (see my previous post https://mcmap.net/q/502748/-wcf-channelfactory-caching)Standoff
You may also want to look at this approach as an alternative to WCF service references: codeproject.com/KB/silverlight/ConsumingWCFServiceWithou.aspx codeproject.com/KB/silverlight/FixingAllAsync.aspx This is the approach that we have used on our latest projects.Standoff

Summary: I believe the best practice is to instantiate your web service client when you are about to use it, then let it go out of scope and get garbage collected. This is reflected in the samples you see coming from Microsoft. Justification follows...

Full: The best full description of the process that I have found is at How to: Access a Service from Silverlight. The example there shows the typical pattern of instantiating the web service client and allowing it to go out of scope (without needing to close it). Web service clients inherit from ClientBase, which has a Finalize method that should free any unmanaged resources, if necessary, when the object is garbage collected.
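That create-use-discard pattern looks roughly like the following sketch. It assumes a generated service reference named ServiceReference1.Service1Client with a DoWork operation (the names are illustrative, matching the test further down; your generated proxy will differ):

    public void LoadData()
    {
        // Create the proxy right before use.
        var client = new ServiceReference1.Service1Client();
        client.DoWorkCompleted += (s, e) =>
        {
            if (e.Error == null)
            {
                // ...use the result here...
            }
            // No explicit close: once nothing references the client,
            // it is garbage collected and ClientBase's finalizer runs.
        };
        client.DoWorkAsync();
        // The client goes out of scope here; the pending async call and
        // its event handler keep it alive until completion.
    }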

I have a decent amount of experience using web services, and I instantiate proxies right before use, then allow them to be garbage collected. I have never had a problem with this approach. Wenlong Dong's blog says that creating the proxy is expensive, but even he says performance improved in .NET 3.5 (perhaps it has improved again since then?). What I can tell you is that performance is a relative term: unless the data being retrieved is trivially small, far more time will be spent serializing/deserializing and in transport than creating the connection. This has certainly been my experience, and you are better off optimizing in those areas first.

Last, since I figure my opinions thus far may be insufficient, I wrote a quick test. I created a Silverlight enabled web service using the template provided with Visual Web Developer 2010 Express (with a default void method called DoWork()). Then in my sample Silverlight client I called it using the following code:

int counter = 0;

public void Test()
{
    // A new proxy instance per call, deliberately never closed.
    var client = new ServiceReference1.Service1Client();
    client.DoWorkCompleted += (obj, args) =>
    {
        // Completed handlers run on the UI thread in Silverlight,
        // so no locking is needed around the counter.
        counter++;
        if (counter > 9999) // all 10,000 calls have completed
        {
            for (int j = 0; j < 10; j++) GC.Collect();
            System.Windows.MessageBox.Show("Completed");
        }
    };
    client.DoWorkAsync();
    // client goes out of scope here and is left to the garbage collector.
}

I then called the Test method using for(int i=0;i<10000;i++) Test(); and fired up the application. It took a little over 20 seconds to load the app and complete all 10,000 web service calls. As the calls were being made, I saw the memory usage for the process jump to over 150 MB, but once the calls completed and GC.Collect() was called, the memory usage dropped to less than half that amount. Far from being a perfect test, it seems to confirm that no memory was leaking, or that any leak was negligible (considering it is probably uncommon to make 10,000 web service calls, each using a separate client instance). It is also a much simpler model than keeping a proxy object around and having to worry about it faulting and having to reopen it.

Justification of Test Methodology: My test focused on two potential problems: a memory leak, and processor time spent creating and destroying the objects. My recommendation is that it is safe to follow the examples provided by the company (Microsoft) that supplies the classes.

If you are concerned about network efficiency, my example poses no problem, since properly creating/disposing these objects does not affect network latency. If 99% of the time spent is network time, then optimizing for a theoretical improvement in the remaining 1% is probably wasteful in terms of development time (assuming there is even a benefit to be gained, and I believe my test clearly shows there is little or none). Yes, the networking calls were local, which means that over the course of 10,000 service calls only about 20 seconds were spent in total waiting on the objects — roughly 2 milliseconds per service call spent on creating them.

Regarding the need to call Dispose: I didn't mean to imply that you shouldn't call it, merely that it didn't appear necessary. If you forget (or simply choose not to), my tests led me to believe that Dispose was being called in the Finalize for these objects. Even so, it would probably be more efficient to call Dispose yourself, but the effect is still negligible. For most software development you gain more from better algorithms and data structures than from agonizing over issues like these (unless there is a serious memory leak). If you require more efficiency, then perhaps you shouldn't be using web services at all, since there are more efficient data transit options than a system based on XML.

Sherikasherill answered 24/8, 2011 at 2:21 Comment(4)
Sorry for the delayed reply. It's interesting that this got upvoted pretty high, but I cannot say it fits my expectations for an answer to this question. Basically you are saying that it is OK not to dispose something that is SUPPOSED to be disposed and to let the GC do the job. Service1Client inherits from ClientBase<TChannel> : ICommunicationObject, IDisposable, and as you can see it implements the IDisposable interface. A basic rule about disposing in .NET is that you dispose of things when they are no longer needed, and do not rely on the GC to do it. Not doing so cannot be viewed as right by me.Skullcap
If you for some reason believe it's right, you need to provide hard evidence, not a clearly synthetic test that is compared to... no other alternative? You just say that 20 seconds is OK by you for 10,000 requests. Locally? What are the specs of the machine? Also, you say it's simpler not to dispose of IDisposable objects and let the GC do the job — yes, it is simpler! Hard to argue with that.Skullcap
Edited my response to include a justification of my test methodology (which I believe to be sound). Sorry if this is not the answer you were looking for.Sherikasherill
Could you try this test over again using the ClientBase(Binding binding, EndpointAddress remoteAddress); constructor. Wenlong suggests that if you use this constructor, the caching will be disabled, which will have an impact on the creation time, so I'm curious to know if you actually see a performance hit when doing so.Drive

Proxy creation is not expensive relative to the call's roundtrip. I've seen commentary that says to call CloseAsync immediately after your calls to other async methods, but that seems to be flawed. In theory the close moves into a pending state and happens after your other async calls end. In practice, I've seen the calls end surprisingly quickly with a fault, and then the CloseAsync itself faults because the channel has already faulted.

What I did was create a generic Caller<TResult> class with a number of CallAsync generic method overloads that accept arguments and a callback of type Action<TResult>. Under the covers, the Caller wires up the XxxCompleted event and checks whether the call completed due to an error. If there was an error, it aborts the channel; if not, it calls CloseAsync and then invokes the callback. This prevents "silly" errors, such as trying to use the channel in the faulted state.

All of the above assumes a create-proxy-make-call-discard-use-of-proxy model. If you had to make many calls in rapid succession, you could adapt this approach to count the number of in-flight calls, and on completion of the last one, do the close or abort then.
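A minimal sketch of that wrapper idea might look like this. The names (Caller, CallAsync, GetData) are illustrative, and the generated proxy is assumed to follow the standard Silverlight async pattern with a Result property and an Error on its completed-event args:

    public class Caller<TResult>
    {
        public void CallAsync(Action<TResult> callback)
        {
            var client = new ServiceReference1.Service1Client();
            client.GetDataCompleted += (s, e) =>
            {
                if (e.Error != null)
                {
                    // Channel is faulted; Close would throw, so Abort.
                    client.Abort();
                    return; // (or surface e.Error to the caller)
                }
                client.CloseAsync();       // clean shutdown on success
                callback((TResult)e.Result); // hand the result back
            };
            client.GetDataAsync();
        }
    }

For the many-calls-in-rapid-succession case, the same class could keep an in-flight counter, incrementing it per CallAsync and closing or aborting only when the last completion arrives.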

It's likely to be rare that proxy creation overhead, per call or per set of related calls, is going to be an issue. Only if the overhead proved too high would I pursue a proxy caching or reuse strategy. Remember, the longer you hold a resource, the greater the chance of failure. In my experience at least, the shorter the lifetime, the better the perceived performance for the user: you interrupt them less, you need less expensive-to-maintain retry logic, and you can be more confident you're not leaking memory or resources, because you know precisely when to get rid of them.

With regard to your question about examples not closing... well, those are examples, and they are usually one-dimensional and incomplete because they illustrate a general use, not all the specifics of what you need to do across an entire application's lifetime.

Barquisimeto answered 27/8, 2011 at 1:0 Comment(0)
