Conventions for WPF Applications that run on both Desktop and Surface (PixelSense) 1.0

Edit: To avoid confusion: This is about the table that was formerly called, or is still called, Microsoft Surface 1.0. It is not about the table that used to be called Microsoft Surface 2.0, and it is not about the tablet computer that is now called Microsoft Surface. End of edit

I'm writing a WPF application that should run both on desktop systems as well as on MS Surface/PixelSense 1.0. I am looking for conventions on how this is usually done.

I am aware there are some differences between the platforms, which is why the basic GUI skeletons are different for the desktop and the PixelSense version (in this case, a Canvas in the desktop version and a ScatterView in the PixelSense version as the root GUI element).

However, there are many WPF-based user controls/GUI fragments in the desktop version that should appear more or less the same way in the PixelSense version.

Unfortunately, standard WPF controls do not seem to work in PixelSense. Controls such as CheckBox have to be replaced with SurfaceCheckBox in order to react to user input, as can be easily verified with this little code sample on PixelSense:

// On PixelSense 1.0, only the SurfaceCheckBox reacts to contacts;
// the standard CheckBox is displayed but ignores touch input.
var item = new ScatterViewItem();
var sp = new StackPanel();
item.Content = sp;
item.Padding = new Thickness(20);
sp.Children.Add(new CheckBox() { Content = "CheckBox" });               // does not react to contacts
sp.Children.Add(new SurfaceCheckBox() { Content = "SurfaceCheckBox" }); // reacts to contacts
myScatterView.Items.Add(item);

Apparently, this means that WPF user controls cannot be displayed on PixelSense without changes. This is confirmed by resources such as Microsoft's documentation on the PixelSense presentation layer (which SO questions such as this one about a WPF tree view also refer to), this blog post on the PixelSense WPF layer, and this SO question on how to rewrite a WPF desktop application for PixelSense. The latter even calls the required changes minimal, but they are changes nonetheless.

Furthermore, reactions to this SO question on how to use a particular WPF desktop control on PixelSense imply that using .NET 4.0 might simplify things, but I don't think .NET 4.0 is supported by the PixelSense 1.0 SDK (it is .NET 3.5-based, as far as I can tell).

As a software engineer, I still cannot agree with the strategy of writing the same GUI fragments (consisting of basically the same controls in the same layout with the same behavior towards the data model) using the same programming language twice. This just seems wrong.

So, three possible solutions that I have found so far:

  • Write the GUI fragments for standard WPF. Then use scripts to copy the libraries that contain those GUI fragments, transforming all relevant Xaml files along the way (e.g. with XSLT) so that standard WPF controls are replaced with their Surface* counterparts. (A sketch of such a rewrite step follows this list.)
    Drawbacks:
    • Increased maintenance: the scripts must be kept working and must be re-run whenever something in the WPF projects changes.
    • Also, defining which Xaml files to consider and which controls to replace with what (think additional 3rd-party PixelSense controls ...) might become complicated.
    • Lastly, the generated PixelSense project would be an exact copy of the WPF project, save for the Xaml files, so no PixelSense-specific code could be introduced (or, if it could, the scripts for generating the PixelSense projects would become even more complex).
  • Target only PixelSense/Surface, and have desktop users install the Surface SDK. Thanks to user Clemens for the suggestion!
    Drawbacks:
    • Users have to install the Surface 1.0 SDK, which is a non-trivial operation on current systems: to get the Surface 1.0 SDK running on a 64-bit Windows 7 machine, various actions such as patching MSI files are required.
    • The PixelSense/Surface 1.0 simulator is the only way to run Surface 1.0 applications on a desktop computer. This simulator is not very usable, and downright buggy on some systems. On my computer it appears like this: (screenshot: buggy Surface 1.0 simulator) - the output covers only about 3/4 of the simulator window, but input is registered in the whole window, i.e. to click the bottom right corner of the Surface simulation, I have to click into the (apparently transparent) bottom right corner of the simulator window.
  • When creating the GUI, rather than explicitly using standard WPF or PixelSense control classes, use a platform-specific factory that creates the appropriate control type.
    Drawbacks:
    • The GUI fragments can no longer be written in Xaml; at least most of the controls and all of their bindings have to be created in C# code. (A sketch of such a factory also follows this list.)
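
For illustration, here is a minimal sketch of what the Xaml rewrite step from the first option could look like, using LINQ to XML instead of XSLT (the Surface XAML namespace URI and the control map are assumptions of this sketch, not a complete mapping):

using System.Collections.Generic;
using System.Xml.Linq;

// Hypothetical build step: rewrites copied Xaml files so that standard WPF
// controls are replaced by their Surface counterparts.
static class XamlSurfaceRewriter
{
    static readonly XNamespace Wpf =
        "http://schemas.microsoft.com/winfx/2006/xaml/presentation";
    static readonly XNamespace Surface =
        "http://schemas.microsoft.com/surface/2008"; // assumed Surface SDK namespace

    static readonly Dictionary<string, string> ControlMap = new Dictionary<string, string>
    {
        { "CheckBox", "SurfaceCheckBox" },
        { "Button", "SurfaceButton" },
        { "ListBox", "SurfaceListBox" },
    };

    public static void Rewrite(string inputPath, string outputPath)
    {
        XDocument doc = XDocument.Load(inputPath);
        foreach (XElement element in doc.Descendants())
        {
            string surfaceName;
            if (element.Name.Namespace == Wpf &&
                ControlMap.TryGetValue(element.Name.LocalName, out surfaceName))
            {
                element.Name = Surface + surfaceName;
            }
        }
        doc.Save(outputPath);
    }
}

Property-element names (e.g. CheckBox.Style) and 3rd-party controls would need additional mapping rules, which is exactly where the maintenance effort mentioned above comes from.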

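And here is a minimal sketch of the factory from the third option; the interface and class names are made up, only the Surface* controls are types from the Surface SDK:

using System.Windows.Controls;
using Microsoft.Surface.Presentation.Controls;

// Illustrative factory interface: shared GUI code asks the factory for controls
// instead of constructing them directly.
public interface IControlFactory
{
    CheckBox CreateCheckBox();
    Button CreateButton();
}

public class DesktopControlFactory : IControlFactory
{
    public CheckBox CreateCheckBox() { return new CheckBox(); }
    public Button CreateButton() { return new Button(); }
}

public class SurfaceControlFactory : IControlFactory
{
    // The Surface controls subclass their WPF counterparts (SurfaceCheckBox
    // derives from CheckBox), so shared code can keep working against the
    // standard WPF types.
    public CheckBox CreateCheckBox() { return new SurfaceCheckBox(); }
    public Button CreateButton() { return new Button(); }
}

The shared GUI code would receive an IControlFactory and call factory.CreateCheckBox() instead of new CheckBox(); in practice, the two factory implementations would live in platform-specific assemblies so that the desktop build does not reference the Surface SDK.
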
I am currently leaning towards the third option, as it simply seems like an acceptable price to pay for being platform-independent. Yet, I think that with WPF being the connecting element between desktop and PixelSense GUIs, this should be a frequent issue, and I wonder whether this hasn't been solved before. So, I'm asking here: How is this usually done?

P.S.: Orientation is not an issue. The aforementioned GUI fragments are displayed in rotatable ScatterViewItems on PixelSense and in their normal vertical orientation on desktops.

Dungaree answered 29/6, 2012 at 7:40
Thanks for pointing me to the renaming confusion. I've removed my answer since it doesn't fit your question well. - Campos
@Clemens: As I said, I think it is a solution that should at least be mentioned. Therefore, I have added it (with a reference to you - hope you're ok with this) to my list of possible solutions. Thanks for pointing it out :-) - Dungaree

Let's start by hopping in the wayback machine to the time when Surface 1.0 was being built...

The year is 2006. There is no concept of multitouch in the Windows operating system (or even in mainstream mobile phones!). There are no APIs that allow an application to respond to users simultaneously interacting with multiple controls. The input routing, capturing, and focusing mechanisms built into existing UI frameworks all basically prohibit multitouch from working in a non-crappy way. Even if those problems magically disappeared, most apps would still crash when the user did something they weren't built to handle (like clicking 'Save' and 'Exit' buttons at the same time), and users would be pretty cranky because the look & feel of existing apps/controls is optimized for tiny mouse cursors instead of big sloppy fingers.

So, in 2006 I got to lead the Surface team in creating an abstraction layer on top of WPF to solve all those problems... the result is the 1.0 SDK you're asking about. We tried to minimize the software engineering concern you mention by subclassing the existing UI controls instead of creating a brand new hierarchy (this decision made developing the SDK much more difficult, btw), so while you have to instantiate different classes on each platform, after that the events and properties used in the original versions of the controls will work as expected for the Surface versions.
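
For example (an illustrative fragment, not SDK sample code), only the construction differs; the property and event come from the standard CheckBox base class:

// Desktop:
var check = new CheckBox { Content = "Accept" };
check.Checked += delegate { /* handle check */ };

// Surface 1.0: different class, same members (SurfaceCheckBox derives from CheckBox)
var surfaceCheck = new SurfaceCheckBox { Content = "Accept" };
surfaceCheck.Checked += delegate { /* handle check */ };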

Fast forward to 2009... multitouch is starting to gain steam. Windows 7 adds native support for multitouch but doesn't really address the UI framework problems.

Fast forward to 2010... the WPF and Surface teams collaborate to take many of the solutions that Surface built for the problems I listed, adapt them to work with Win7's native touch stack, and ship them as a built-in part of WPF 4.0... this provided multitouch input routing, events, capture, and focus, but touch capabilities couldn't simply be added to the built-in WPF controls without breaking tons of existing apps. So the Surface team released the "Surface Toolkit for Windows Touch", which provided WPF 4.0-compatible versions of most Surface 1.0 UI controls. But Surface 1.0 from back in 2006 wasn't compatible with this new version of WPF, so while you could finally build killer touch experiences for both Windows and Surface, you still couldn't share 100% of your code.

Fast forward to 2011... Surface 2.0 (the one using PixelSense technology) is released. It uses the new WPF 4.0 features (so many of the APIs that came with the Surface 1 SDK are no longer needed) and includes what is essentially an updated version of the Surface Toolkit for Windows, though now the controls have been re-styled to be Metro. Finally, product timelines have synced up and you can use a single code base to build great touch experiences for both platforms.

Ok, now back to your present-day question... you're asking about Surface 1.0, so the 2011 awesomeness doesn't really help you. You can leverage the 2010 stuff, though:

  1. Build your app using the WPF 4 APIs and the Surface Toolkit for Windows Touch
  2. Write some code that takes Surface input events and routes them through WPF 4's input stack (the ability to do this isn't very widely known, but it was created specifically for this type of scenario); a minimal sketch of what this can look like on the WPF side follows below.

So how do you do those things? Good question. Fortunately, my buddy Joshua Blake has a blog post along with some reusable code to walk you through it. Take a look at http://nui.joshland.org/2010/07/sharing-binaries-between-surface-and.html.
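
To give a rough idea of step 2, here is a minimal, illustrative sketch of the WPF 4 side: a custom TouchDevice with methods you would call from wherever the Surface contact data arrives. The Surface-side wiring (and a complete, tested implementation) is what Josh's post covers; the class and member names below are made up:

using System.Windows;
using System.Windows.Input;
using System.Windows.Media;

// Sketch: promotes externally received contact data (e.g. Surface 1.0 contacts)
// to WPF 4 touch events by subclassing TouchDevice.
public class ContactTouchDevice : TouchDevice
{
    private Point _position; // last known position in root visual coordinates

    public ContactTouchDevice(int contactId, PresentationSource source)
        : base(contactId)
    {
        SetActiveSource(source);
    }

    // Call these from the Surface contact event handlers.
    public void ContactDown(Point position) { _position = position; Activate(); ReportDown(); }
    public void ContactMove(Point position) { _position = position; ReportMove(); }
    public void ContactUp(Point position)   { _position = position; ReportUp(); Deactivate(); }

    public override TouchPoint GetTouchPoint(IInputElement relativeTo)
    {
        Point p = _position;
        Visual target = relativeTo as Visual;
        if (target != null && ActiveSource != null)
        {
            // Simplified: assumes relativeTo is a descendant of the root visual.
            p = ActiveSource.RootVisual.TransformToDescendant(target).Transform(_position);
        }
        return new TouchPoint(this, p, new Rect(p.X, p.Y, 1, 1), TouchAction.Move);
    }

    public override TouchPointCollection GetIntermediateTouchPoints(IInputElement relativeTo)
    {
        return new TouchPointCollection(); // intermediate points not tracked in this sketch
    }
}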

Quimby answered 29/6, 2012 at 13:29
