For the basic scenario described in the MSDN overview (under "Touch and Manipulation"), TouchEnter and TouchLeave are fired for every corresponding TouchDown and TouchUp respectively. Unlike the mouse, touch and stylus input are not constrained to maintain contact with the screen.
Is there a way to use TouchEnter and TouchLeave to capture only the case where a finger is dragged into the UIElement? Since these events fire for every TouchDown and TouchUp, what is the best way to differentiate them?
One strategy that works for the single-finger case is to set a flag on TouchDown and check whether it is set on TouchUp, which allows some conditional checks on TouchUp. However, this isn't feasible for multiple fingers.
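For reference, a minimal sketch of that single-finger flag approach (the class and field names here are my own placeholders, not anything from the framework):

```csharp
using System.Windows.Controls;
using System.Windows.Input;

// Illustrative control for the flag strategy described above.
public class SingleTouchControl : UserControl
{
    private bool _touchedDownHere; // one shared flag, so this only works for a single finger

    public SingleTouchControl()
    {
        // Set the flag when the contact starts on this element...
        TouchDown += (s, e) => _touchedDownHere = true;

        // ...and check it on TouchUp to run conditional logic there.
        TouchUp += (s, e) =>
        {
            if (_touchedDownHere)
            {
                // the finger was pressed on this element, not just dragged across it
            }
            _touchedDownHere = false;
        };
    }
}
```

With two or more fingers the single shared flag gets clobbered, which is why this doesn't scale beyond one contact.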
There are no PreviewTouchEnter or PreviewTouchLeave events, only PreviewTouchDown and PreviewTouchUp. The sequence of events for a finger lowered onto a UIElement and then raised over it is as follows (a small logging sketch follows the list):
- TouchEnter
- PreviewTouchDown
- TouchDown
- PreviewTouchUp
- TouchUp
- TouchLeave
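This ordering can be observed by logging each event with its touch device id; here is a small diagnostic sketch (the window class name and the Debug.WriteLine output are just for illustration):

```csharp
using System.Diagnostics;
using System.Windows;
using System.Windows.Input;

public class TouchProbeWindow : Window
{
    public TouchProbeWindow()
    {
        // Attach a logger to each touch event to see the exact firing order per device.
        TouchEnter       += (s, e) => Log("TouchEnter", e);
        PreviewTouchDown += (s, e) => Log("PreviewTouchDown", e);
        TouchDown        += (s, e) => Log("TouchDown", e);
        PreviewTouchUp   += (s, e) => Log("PreviewTouchUp", e);
        TouchUp          += (s, e) => Log("TouchUp", e);
        TouchLeave       += (s, e) => Log("TouchLeave", e);
    }

    private static void Log(string name, TouchEventArgs e) =>
        Debug.WriteLine($"{name} (device {e.TouchDevice.Id})");
}
```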
This sequence doesn't help differentiate a TouchEnter that happened because a finger was dragged across the screen into the UIElement from one where the finger was lowered directly onto the UIElement. Am I missing something, or does the framework itself not support this differentiation?