By design, the proxy object (or class) generated on the client side is distinct from, although structurally identical to, the object used by the corresponding web method on the server side.
For example, when the server side serializes and returns an object A, it is received and deserialized on the client end into a generated object B whose structure is identical to object A (same members, same sub-classes, etc.), but which lives in a different namespace specific to the client-side web-service reference, and likely has a different name. There are good reasons for that, but there is no point discussing them here.
As pointed out earlier in the thread, there might be a lot of useful code developed around object A, and it would be a pity to re-implement it (or simply duplicate source code). Besides, we want to avoid code duplication as much as possible to ease further maintenance.
The solution of fiddling with the auto-generated deserialization of the client stub (SOAP client) is even riskier in the long run, because it requires you to apply the same manipulations after every client-stub re-synchronization with each new version of the server side. Besides, I understand it was possible at some point in 2008, but what guarantees do we have that it will keep working the same way in subsequent Visual Studio versions? I couldn't do it in VS2019.
There is a simple (and safe) solution for lazy (and cautious) people like me; here is how:
After the client stub receives object B (corresponding to object A on the server side), it is possible to swiftly "transform" it back into an object A. You just need to do a "deep copy" from the received object B into a new object A! If objects A (and B) are complex enough, you might consider a generic deep copy using System.Reflection to match each sub-field by name (a rough sketch of that idea follows), OR you can use a generic serializer/deserializer to convert the received object B to text, then convert that text back into an object A. I did the job in a few lines of code using the Newtonsoft.Json NuGet package.
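For the reflection alternative, here is a minimal sketch of the idea (illustration only, not code I actually shipped; the type names reuse those from the example further below). It copies the public properties of the received proxy object into a new instance of the original type by matching property names. As written it only handles flat properties with compatible types; nested proxy sub-classes would fail the type check and would need a recursive call along the same lines, which is precisely why the serializer route ends up shorter:

using System;
using System.Reflection;

public static class ProxyMapper
{
    // "Deep copy by name": create a TTarget and fill every writable property
    // whose name and type match a property of the source object.
    public static TTarget CopyByName<TTarget>(object source) where TTarget : new()
    {
        TTarget target = new TTarget();
        foreach (PropertyInfo targetProp in typeof(TTarget).GetProperties())
        {
            PropertyInfo sourceProp = source.GetType().GetProperty(targetProp.Name);
            if (sourceProp != null
                && targetProp.CanWrite
                && targetProp.PropertyType.IsAssignableFrom(sourceProp.PropertyType))
            {
                targetProp.SetValue(target, sourceProp.GetValue(source, null), null);
            }
        }
        return target;
    }
}

// Usage: ObjectTypeA instanceA = ProxyMapper.CopyByName<ObjectTypeA>(instanceB);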
Then, for the Newtonsoft route, see the example below, in the .asmx service page on the server side:
[WebMethod]
public ObjectTypeA WebMethodPeek()
{
    ObjectTypeA instanceA = GetSerializableObjectA();
    return instanceA;
}
And on the web-service client side:
using Newtonsoft.Json;

public ObjectTypeA Peek()
{
    SoapClient cli = new SoapClient("my-end-point-name");
    ObjectTypeB instanceB = cli.WebMethodPeek();
    string text = JsonConvert.SerializeObject(instanceB);
    ObjectTypeA instanceA = JsonConvert.DeserializeObject<ObjectTypeA>(text);
    return instanceA;
}
Thanks to Enrique Reyes for the simple and elegant "DeepCopy" answer he provided in the post How to deep copy between objects of different types in C#.NET.
After writing this answer, I found the same strategy (somewhat, and without much emphasis) also explained in a different way in the post Serializing/deserializing System.Object using a different type.
Addendum
After some hands-on work I found a few surmountable annoyances with this method of using JSON serialization/deserialization to deep-copy identical (or almost identical) data structures.
Expect a larger footprint for large objects with many sub-classes and properties. When adding a web-service reference on the client side, not only does it replicate the structure hierarchy (with different namespaces) for the web methods' arguments, it replicates another one if the type is used as a return type. Counting the original one (which is shared via a common project library), we end up with 3 copies of the same large serializable structure.
Then I encountered an obstacle with polymorphic sub-classes, which confuse the deserializer. That's because, in my case, those are loosely typed as object in order to hold different derived classes regardless of their type. In such a case we can still write a set of callback functions to do the patch work in a class derived from JsonConverter (see the sketch below). Nothing absolutely needs to be done on the server side.
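For illustration only (my real converter is more involved, and the derived-class and field names below are hypothetical), here is a minimal sketch of such a converter: it is registered only on the client, handles only members declared as object, and picks the concrete type from a distinguishing JSON field; arrays are omitted for brevity.

using System;
using Newtonsoft.Json;
using Newtonsoft.Json.Linq;

public class LooseMemberConverter : JsonConverter
{
    // Only intervene for members declared as plain "object"
    public override bool CanConvert(Type objectType)
    {
        return objectType == typeof(object);
    }

    // Leave the write side to the default serialization; we only patch the read side
    public override bool CanWrite { get { return false; } }

    public override void WriteJson(JsonWriter writer, object value, JsonSerializer serializer)
    {
        throw new NotSupportedException();
    }

    public override object ReadJson(JsonReader reader, Type objectType, object existingValue, JsonSerializer serializer)
    {
        JToken token = JToken.Load(reader);
        JValue primitive = token as JValue;
        if (primitive != null) {
            return primitive.Value;               // null, numbers, strings: nothing to patch
        }
        JObject obj = (JObject)token;
        if (obj["Radius"] != null) {              // hypothetical field identifying DerivedCircle
            return obj.ToObject<DerivedCircle>(serializer);
        }
        if (obj["SideLength"] != null) {          // hypothetical field identifying DerivedSquare
            return obj.ToObject<DerivedSquare>(serializer);
        }
        throw new JsonSerializationException("Unrecognized polymorphic member");
    }
}

The converter is then passed through JsonSerializerSettings when doing the deep copy:

JsonSerializerSettings settings = new JsonSerializerSettings();
settings.Converters.Add(new LooseMemberConverter());
string text = JsonConvert.SerializeObject(instanceB);
ObjectTypeA instanceA = JsonConvert.DeserializeObject<ObjectTypeA>(text, settings);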
But overall everything worked out despite the large size and complexity of the exchanged objects. I still use xsd.exe to generate fully serializable classes for the larger, more complex objects.
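For reference, a typical xsd.exe invocation for that purpose looks like the following (the schema file name and namespace are placeholders):

xsd.exe MySharedTypes.xsd /classes /language:CS /namespace:MyCompany.SharedTypes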
Another, even simpler, strategy: using binary serialization
Afterwards I found it much simpler to exchange binary objects (i.e. byte[]), involving only the .NET Framework. This allows using the very same class, shared through a common library, on both the server side and the client side. On either side we just need to serialize before sending and deserialize what we receive.
I made up some very simplified examples omitting whatever is not necessary for understanding.
Required namespaces:
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;
Example client receiving from server:
SharedClass myClass = null;
byte[] byteArray = my_WebService.GetInfo();
BinaryFormatter formatter = new BinaryFormatter();
using (MemoryStream stream = new MemoryStream(byteArray)) {
    object obj = formatter.Deserialize(stream);
    myClass = obj as SharedClass;
}
Example client sending to server:
byte[] byteArray = null;
SharedClass myClass = new SharedClass(xyz);
BinaryFormatter formatter = new BinaryFormatter();
using (MemoryStream stream = new MemoryStream()) {
    formatter.Serialize(stream, myClass);
    byteArray = stream.ToArray();
}
my_WebService.SendInfo(byteArray);
On the server side it's pretty much the same thing.
Server sending to client:
[WebMethod]
public byte[] GetInfo()
{
    SharedClass myClass = new SharedClass(xyz);
    BinaryFormatter formatter = new BinaryFormatter();
    using (MemoryStream stream = new MemoryStream()) {
        formatter.Serialize(stream, myClass);
        return (stream.ToArray());
    }
}
Server receiving from client:
[WebMethod]
public void SendInfo(byte[] byteArray)
{
    SharedClass myClass = null;
    BinaryFormatter formatter = new BinaryFormatter();
    using (MemoryStream stream = new MemoryStream(byteArray)) {
        object obj = formatter.Deserialize(stream);
        myClass = obj as SharedClass;
    }
}
This works swiftly without much work. But:
- Ensure that the shared structure is always the same version on both sides (or, more precisely, exactly the same), otherwise the deserialization will break.
- Even if the deserialization does not break, still do minimum data-integrity validations, even when receiving data from trusted users: look for out-of-range values, corrupted dates, etc. (a minimal sketch of such checks follows this list).
- Security-wise, many are not comfortable (often freaking out) exchanging data through web services in binary form. It always depends on what you do with the received data. If you just save the binary blob as-is, it is worrying indeed. But it may be completely irrelevant if it's structured data that gets decomposed into smaller data elements and then re-assembled into a structured object: illegitimate data is very likely to fail even basic type validation. In the worst case you end up with some corrupted "valid" data, which you could get anyway from trusted users with good intentions.
- Having said that, web services should have a minimum of access control: firewall, IP filtering, username/password, etc.
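As a minimal sketch of the kind of sanity checks meant in the second point above (the property names and bounds are hypothetical, to be adapted to the actual SharedClass):

private static bool LooksValid(SharedClass myClass)
{
    if (myClass == null) return false;
    if (myClass.ItemCount < 0 || myClass.ItemCount > 100000) return false;                              // out-of-range value
    if (myClass.Created < new DateTime(2000, 1, 1) || myClass.Created > DateTime.UtcNow) return false;  // corrupted date
    if (string.IsNullOrEmpty(myClass.Name)) return false;                                               // mandatory field missing
    return true;
}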