Edit: To avoid confusion: this question is about the table that was formerly called (or is still called) Microsoft Surface 1.0. It is not about the table that used to be called Microsoft Surface 2.0, and it is not about the tablet computer that is now called Microsoft Surface. End of edit
I'm writing a WPF application that should run both on desktop systems and on MS Surface/PixelSense 1.0. I am looking for conventions on how this is usually done.
I am aware there are some differences between the platforms, which is why the basic GUI skeletons are different for the desktop and the PixelSense version (in this case, a Canvas in the desktop version and a ScatterView in the PixelSense version as the root GUI element).
However, there are many WPF-based user controls/GUI fragments in the desktop version that should appear more or less the same way in the PixelSense version.
Unfortunately, standard WPF controls do not seem to work on PixelSense. Controls such as CheckBox have to be replaced with SurfaceCheckBox in order to react to user input, as can be easily verified with this little code sample on PixelSense:
var item = new ScatterViewItem();
var sp = new StackPanel();
item.Content = sp;
item.Padding = new Thickness(20);
// The standard WPF control ignores touch input on PixelSense ...
sp.Children.Add(new CheckBox() { Content = "CheckBox" });
// ... while the Surface counterpart reacts to it.
sp.Children.Add(new SurfaceCheckBox() { Content = "SurfaceCheckBox" });
// myScatterView is a ScatterView defined in the surrounding XAML.
myScatterView.Items.Add(item);
Apparently, this means that the WPF user controls cannot be displayed on PixelSense without changes. This is confirmed by resources such as Microsoft's documentation on the PixelSense presentation layer (which SO questions such as this one related to a WPF tree view also refer to), this blog post on the PixelSense WPF layer, and this SO question on how to rewrite a WPF desktop application for PixelSense. The latter page even calls the required changes minimal, but still, they are changes.
Furthermore, reactions to this SO question on how to use a particular WPF desktop control on PixelSense imply that using .NET 4.0 might simplify things, but I don't think .NET 4.0 is supported by the PixelSense 1.0 SDK (it is .NET 3.5-based, as far as I can tell).
As a software engineer, I still cannot agree with the strategy of writing the same GUI fragments (consisting of basically the same controls in the same layout with the same behavior towards the data model) using the same programming language twice. This just seems wrong.
So, here are the three possible solutions I have found so far:
- Write the GUI fragments for standard WPF. Then, using scripts, copy the libraries that contain those GUI fragments, transforming all relevant XAML files (e.g. with XSLT) so that standard WPF controls are replaced with their Surface* counterparts (a sketch of such a transformation follows below this list).
Drawbacks:
- Increased maintenance: the scripts have to be kept working and rerun whenever something in the WPF projects changes.
- Defining which XAML files to consider and which controls to replace with what (think additional third-party PixelSense controls ...) might end up complicated.
- The generated PixelSense project would be an exact copy of the WPF project save for the XAML files, so no PixelSense-specific code could be introduced (or, if it could, the scripts for generating the PixelSense projects would become even more complex).
- Target only PixelSense/Surface, and have desktop users install the Surface SDK. Thanks to user Clemens for the suggestion!
Drawbacks:
- Users have to install the Surface 1.0 SDK, which is a non-trivial operation on current systems: to get the Surface 1.0 SDK to run on a 64-bit Windows 7 machine, various actions such as patching MSI files have to be performed.
- The PixelSense/Surface 1.0 simulator is the only way to run Surface 1.0 applications on a desktop computer. This simulator is not very usable, and downright buggy on some systems: on my computer, the rendered output covers only about three quarters of the simulator window, while input is registered across the whole window; i.e. to click on the bottom-right corner of the simulated Surface, I have to click into the (apparently transparent) bottom-right corner of the simulator window.
- When creating the GUI, rather than explicitly using standard WPF or PixelSense control classes, use a platform-specific factory that creates the appropriate control type (sketched at the end of this question).
Drawbacks:
- The GUI fragments cannot be written in XAML any more; at least most of the controls and all of their bindings have to be created in C# code.
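To illustrate the first option: the transformation would not necessarily have to be XSLT. Here is a minimal sketch of the idea in C# using LINQ to XML (available in .NET 3.5). The list of controls to replace is made up for illustration and would have to be maintained by hand, and property-element syntax (e.g. CheckBox.IsChecked) is not handled:
using System;
using System.Linq;
using System.Xml.Linq;

static class XamlTransformer
{
    // The standard WPF presentation namespace and the Surface SDK XAML namespace.
    static readonly XNamespace Wpf = "http://schemas.microsoft.com/winfx/2006/xaml/presentation";
    static readonly XNamespace Surface = "http://schemas.microsoft.com/surface/2008";

    // Controls with a Surface* counterpart; illustrative, not complete.
    static readonly string[] Replaceable = { "Button", "CheckBox", "RadioButton", "ListBox", "ScrollViewer", "Slider" };

    public static void Transform(string inputPath, string outputPath)
    {
        var doc = XDocument.Load(inputPath);
        // ToList() so that renaming elements does not interfere with the traversal.
        foreach (var element in doc.Descendants().ToList())
        {
            if (element.Name.Namespace == Wpf && Replaceable.Contains(element.Name.LocalName))
                element.Name = Surface + ("Surface" + element.Name.LocalName);
        }
        doc.Save(outputPath);
    }
}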
I am currently leaning towards the third option, as it simply seems like an acceptable price to pay for being platform-independent. Yet, I think that with WPF being the connecting element between desktop and PixelSense GUIs, this should be a frequent issue, and I wonder whether this hasn't been solved before. So, I'm asking here: How is this usually done?
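For concreteness, here is roughly what I have in mind for the third option (a minimal sketch; the interface and class names are made up). It relies on the fact that the Surface* controls derive from their standard WPF counterparts, so shared code can be written against the WPF base types:
using System.Windows.Controls;
using Microsoft.Surface.Presentation.Controls; // referenced only by the PixelSense build

// Hypothetical factory interface; one implementation per platform,
// each living in a platform-specific assembly.
public interface IControlFactory
{
    Button CreateButton();
    CheckBox CreateCheckBox();
}

// Desktop implementation returns the standard WPF controls ...
public class DesktopControlFactory : IControlFactory
{
    public Button CreateButton() { return new Button(); }
    public CheckBox CreateCheckBox() { return new CheckBox(); }
}

// ... while the PixelSense implementation returns the Surface counterparts.
// SurfaceButton/SurfaceCheckBox derive from Button/CheckBox,
// so they satisfy the same interface.
public class SurfaceControlFactory : IControlFactory
{
    public Button CreateButton() { return new SurfaceButton(); }
    public CheckBox CreateCheckBox() { return new SurfaceCheckBox(); }
}

// Shared GUI fragments depend only on the interface:
public static class FragmentBuilder
{
    public static StackPanel BuildFragment(IControlFactory factory)
    {
        var panel = new StackPanel();
        var checkBox = factory.CreateCheckBox();
        checkBox.Content = "CheckBox";
        panel.Children.Add(checkBox);
        return panel;
    }
}
Each application would pass in its own factory; the drawback stated above remains, though: the fragments are assembled in C# rather than XAML.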
P.S.: Orientation is not an issue. The aforementioned GUI fragments are displayed in rotatable ScatterViewItems on PixelSense and in their normal vertical orientation on desktops.