(Original creator: eknochs)
Touch interactions in the user interface became common several years ago, with the advent of new portable devices. But in my sphere of development technology interest, it is Windows 8 that is introducing touch interaction to a much bigger audience of developers. Yes, I know Windows 7 also had touch, but how many laptops, or 27-inch workstation monitors, were touch-enabled? If you still get junk-mail catalogues from electronics discounters, you will see that more and more of the new Windows 8 machines are touch-enabled. Even the touch-pad has become more powerful. So the operating system will confront users and teach them to accept, and even embrace, touch interactions in the UI. It is therefore inevitable that application developers will need to accept and embrace touch interactions as well.

Before we start development, there are interesting design paradoxes to resolve. For example, Microsoft suggests that while touch interactions should be adopted in new applications, keyboard and pointing devices should continue to be fully supported**. So do you design a UI as if touch were the only interaction method, or do you compromise it to accommodate a wider choice of interaction devices (please don't call a mouse an old or legacy device)? Another example comes from the desire to maintain a single code line for all deployment scenarios.
We can handle different OS and DBMS combinations, but in the future, target screen size and resolution may vary enormously. Font and window sizes can be changed dynamically to compensate, and we can even alter the layout of objects, but the size of our (adult) fingertips, and the distances between fingers and so on, won't change to suit.

Suppose that we are ready to develop, and we have all the software tools that we need. Unfortunately, we will have to learn new terms and concepts. We will have to understand the differences between gestures, manipulations and interactions. The MSDN reference below can be expanded to find Microsoft's definitions as an example. If you look at gestures, you can see that Slide and Swipe are considered different gestures. How can that be? One of the interactions is Zoom. Sounds intuitive, but there are three kinds: optical zooming, semantic zooming and resizing. OK, we can learn all this, but unlike using gestures in modern applications, where the attraction is speed and intuitive actions, programming for them can be the opposite, at least at first. I am confident that we will be developing interesting applications in the future that will include touch interactions. This will be matched by various challenges along the way (functional testing sounds challenging).
BTW, a Swipe is a short-distance version of a Slide, so I'd still be a bit careful about allowing both of those gestures in the same business function.

** http://msdn.microsoft.com/en-us/library/windows/apps/hh700412.aspx
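That short-distance distinction is exactly why mixing the two gestures is risky: once both are allowed, the only thing separating them is a distance threshold. A minimal sketch of the idea, in Python; the function name and the 40-pixel cutoff are illustrative assumptions for this post, not values from the Windows touch APIs:

```python
import math

# Assumed threshold for illustration only: a completed one-finger drag
# that travels no further than this counts as a Swipe rather than a Slide.
SWIPE_MAX_DISTANCE = 40.0  # pixels

def classify_drag(start, end):
    """Classify a completed one-finger drag as 'swipe' or 'slide'.

    Treats a Swipe as a short-distance Slide, so total distance
    travelled is the only discriminator.
    """
    dx = end[0] - start[0]
    dy = end[1] - start[1]
    distance = math.hypot(dx, dy)
    return "swipe" if distance <= SWIPE_MAX_DISTANCE else "slide"
```

With a cutoff like this, a drag ending just either side of the threshold triggers two different business functions, which is the ambiguity I would be careful about.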