
In the tech space, time moves quickly; a little more than seven years ago, smartphones as we know them did not exist. Now they’re the most profitable segment of the tech industry on Earth (and so prevalent that smartphone addiction is a real problem; see How To Cure Smartphone Addiction (A Smartphone Detox)). A consequence of this is that it’s easy to lose sight of just how revolutionary and important the technologies we use really are.

Touchscreens and multitouch interfaces are now a permanent part of the fundamental language of human-computer interaction. All future UIs will carry echoes of touch interfaces with them, in the same way that the keyboard and the mouse permanently altered the language of the interfaces that came after them. To that end, today we’ll be taking a moment to talk about how touchscreens and the interfaces they enable came to exist, and where they’re going from here.

First, though, take a moment and watch this video of the original iPhone keynote:

Listen to the sound the audience makes when they witness slide-to-unlock and swipe-to-scroll for the first time. Those people were completely blown away. They had never seen anything like it before. Steve Jobs might as well have reached through the screen and pulled a BLT out of the ether, as far as they were concerned. These basic touch interactions that we take for granted were totally new to them, and had obvious value. So how did we get there? What had to happen to get to that particular day in 2007?

History

Surprisingly enough, the first touchscreen device was capacitive (like modern phones, rather than the resistive technology of the 1980s and 1990s) and dates back to around 1966. The device was a radar screen used by the Royal Radar Establishment for air traffic control, invented by E. A. Johnson for that purpose. The touchscreen was bulky, slow, imprecise, and very expensive, but (to its credit) it remained in use until the 1990s. The technology proved to be largely impractical, and not much progress was made for almost a decade.


The technology used in this kind of monotouch capacitive screen is actually pretty simple. You take a sheet of conductive, transparent material, run a small current through it (creating a uniform static field), and measure the current at each of the four corners. When an object like a finger touches the screen, the gap between it and the charged plate forms a capacitor. Each corner supplies current to charge that capacitor in rough proportion to how close it is to the touch, so by comparing the four corner currents you can figure out where the touch event is occurring, and report it back to the central computer. This kind of capacitive touchscreen works, but it isn’t very accurate, and it can’t log more than one touch event at a time.
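If you like to think in code, here’s a minimal sketch of that corner-current calculation, in Python. The current values are made up for the example, and a real controller would layer calibration and filtering on top of this:

```python
def locate_touch(i_tl, i_tr, i_bl, i_br):
    """Estimate a touch position on a surface-capacitive screen.

    Each argument is the current drawn from one corner electrode
    (top-left, top-right, bottom-left, bottom-right). A finger
    forms a capacitor with the plate, and each corner supplies
    charging current in rough proportion to its closeness to the
    touch. Illustrative only: real controllers calibrate heavily.
    """
    total = i_tl + i_tr + i_bl + i_br
    if total <= 0:
        return None  # no touch detected
    # The fraction of current arriving from the right-hand (or top)
    # electrodes gives a normalized coordinate in [0, 1] per axis.
    x = (i_tr + i_br) / total
    y = (i_tl + i_tr) / total
    return (x, y)

# A touch near the top-right corner draws most of its current
# from the two nearby electrodes.
print(locate_touch(0.25, 0.45, 0.10, 0.20))  # roughly (0.65, 0.70)
```

Notice that two simultaneous fingers would simply blend their currents together, and the math would report a single averaged point. That’s exactly why these early screens were monotouch.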

Image: a radar display, the setting for the first capacitive touchscreen

The next major event in touchscreen technology was the invention of the resistive touchscreen in 1977, by a company called Elographics. Resistive touchscreens work by using two sheets of flexible, transparent material, with conductive lines etched onto each in opposing directions (horizontal on one sheet, vertical on the other). The computer rapidly alternates between the two sheets: it feeds current to the horizontal lines and tests for voltage on the vertical ones, then vice-versa. When an object presses the screen hard enough, the lines on the two sheets make contact, and the voltages measured in each phase of the scan tell you which horizontal and vertical lines have been activated. The intersection of those lines gives you the precise location of the touch event. Resistive screens are very accurate and aren’t impacted by dust or water, but they pay for those advantages with more cumbersome operation: they need significantly more pressure than capacitive screens (making swipe gestures with a finger impractical) and can’t register multiple touch events.
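Here’s a rough simulation of that alternating scan, sketched in Python. All the hardware details (the drive and read helpers, the ADC resolution, the pressure threshold, the simulated finger) are invented for illustration rather than taken from any particular controller:

```python
ADC_MAX = 1023          # assume a 10-bit analog-to-digital converter
PRESSURE_THRESHOLD = 8  # readings below this mean "no contact"

_touch = (0.30, 0.60)   # simulated finger: 30% across, 60% down

def drive_plate(sheet):
    """Put a voltage gradient across one sheet (simulated no-op)."""
    pass

def read_adc(sheet):
    """Sample the voltage the sensing sheet picks up at the contact
    point. With no touch, the sheets never meet and we read zero."""
    if _touch is None:
        return 0
    fraction = _touch[0] if sheet == "vertical" else _touch[1]
    return int(fraction * ADC_MAX)

def read_touch():
    # Phase 1: energize the horizontal lines, sense on the vertical
    # sheet to recover the X coordinate of the contact point.
    drive_plate("horizontal")
    raw_x = read_adc("vertical")

    # Phase 2: swap roles to recover the Y coordinate.
    drive_plate("vertical")
    raw_y = read_adc("horizontal")

    if raw_x < PRESSURE_THRESHOLD or raw_y < PRESSURE_THRESHOLD:
        return None  # the sheets aren't touching anywhere
    # The sensed voltage is a fraction of the drive voltage,
    # proportional to where along the gradient the contact sits.
    return (raw_x / ADC_MAX, raw_y / ADC_MAX)

print(read_touch())  # roughly (0.3, 0.6)
```

Because each scan can only ever recover one X voltage and one Y voltage, a second simultaneous finger has nowhere to go in this scheme, which is why resistive screens are single-touch.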

These touchscreens did, however, prove to be both good enough and cheap enough to be useful, and were used for various fixed-terminal applications, including industrial machine controllers, ATMs, and checkout devices. Touchscreens didn’t really hit their stride until the 1990s, though, when mobile devices first began to hit the market. The Newton, the first PDA, released by Apple in 1993, was a then-revolutionary device that combined a calculator, a calendar, an address book, and a note-taking app. It used a resistive touchscreen to make selections and input text (via early handwriting recognition), and did not support wireless communication.

Image: the Apple Newton PDA

The PDA market continued to evolve through the early 2000s, eventually merging with cell phones to produce the first smartphones, such as the early Treos and BlackBerry devices. However, these devices were stylus-dependent, and usually attempted to imitate the structure of desktop software, which became cumbersome on a tiny, stylus-operated touchscreen. These devices (a bit like Google Glass today) were exclusively the domain of power-nerds and businesspeople who actually needed the ability to read their email on the go.

That changed in 2007 with the introduction of the iPhone, in the keynote you just watched. The iPhone brought an accurate, inexpensive, multi-touch screen to market. The multi-touch screens used by the iPhone rely on a carefully etched matrix of capacitance-sensing wires: rather than measuring changes to the capacitance of the screen as a whole, this scheme can detect which individual intersections in the matrix are gaining capacitance. This allows for dramatically greater precision, and for registering multiple touch events that are sufficiently far apart (permitting gestures like pinch-to-zoom and better virtual keyboards). To learn more about how the different kinds of touchscreens operate, check out our article on the differences between capacitive and resistive touchscreens.
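To make the difference concrete, here’s a hedged sketch of how a controller might pull multiple touch points out of that matrix: read a grid of capacitance changes relative to an untouched baseline, then report each local peak above a noise threshold as one finger. The grid values and the threshold are invented for the example:

```python
THRESHOLD = 50  # arbitrary units above baseline noise

def find_touches(deltas):
    """Report each local maximum in a capacitance-delta grid.

    `deltas` is a 2D list with one entry per row/column wire
    intersection. Fingers show up as peaks, so two fingers that
    are far enough apart produce two separate touch points.
    """
    touches = []
    rows, cols = len(deltas), len(deltas[0])
    for r in range(rows):
        for c in range(cols):
            value = deltas[r][c]
            if value < THRESHOLD:
                continue
            # Keep only cells that beat all of their neighbours,
            # so each finger becomes exactly one reported point.
            neighbours = [
                deltas[nr][nc]
                for nr in range(max(0, r - 1), min(rows, r + 2))
                for nc in range(max(0, c - 1), min(cols, c + 2))
                if (nr, nc) != (r, c)
            ]
            if all(value >= n for n in neighbours):
                touches.append((r, c))
    return touches

# Two fingers on a small grid: two distinct peaks, two touches.
grid = [
    [0,  0, 0,  0, 0],
    [0, 90, 0,  0, 0],
    [0,  0, 0, 70, 0],
    [0,  0, 0,  0, 0],
]
print(find_touches(grid))  # [(1, 1), (2, 3)]
```

Once you have two points, a gesture like pinch-to-zoom reduces to tracking the distance between them from frame to frame.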

The big innovation that the iPhone brought with it, though, was the idea of physicalist software. Virtual objects in iOS obey physical intuitions: you can slide and flick them around, and they have mass and friction. It’s as though you’re dealing with a universe of two-dimensional objects that you can manipulate simply by touching them. This allows for dramatically more intuitive user interfaces, because everyone arrives with a pre-learned intuition for how to interact with physical things. This is probably the most important idea in human-computer interaction since the window, and it’s been spreading: virtually all modern laptops support multi-touch gestures, and many of them now have touchscreens.
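That sense of mass and friction is, at bottom, a tiny physics simulation. Here’s an illustrative Python sketch of flick scrolling; the constants are invented, and real toolkits tune these deceleration curves obsessively:

```python
FRICTION = 0.95    # fraction of velocity kept each frame
STOP_SPEED = 0.5   # pixels/frame below which the list settles

def flick_scroll(position, velocity):
    """Yield successive scroll offsets after the finger lifts.

    The list inherits the flick's velocity, and friction bleeds
    it off exponentially until the motion settles.
    """
    while abs(velocity) > STOP_SPEED:
        position += velocity
        velocity *= FRICTION  # friction: exponential decay
        yield position

# A flick at 40 px/frame coasts, decelerates, and glides to rest
# instead of stopping dead the instant the finger leaves the glass.
for offset in flick_scroll(0.0, 40.0):
    pass
print(round(offset))  # total distance the flick carried the list
```

The point isn’t the specific numbers; it’s that the object keeps moving after you let go, exactly the way a physical object would.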

Since the launch of the iPhone, a number of other mobile operating systems (notably Android and Windows Phone) have successfully reproduced the core good ideas of iOS and, in many respects, exceeded them. However, the iPhone does get credit for defining the form factor and the design language that all future devices would work within.

Image: an Android figurine eating an apple

What’s Next

Multi-touch screens will probably continue to get better in terms of resolution and the number of simultaneous touch events that can be registered, but the real future is in software, at least for now. Google’s new Material Design initiative is an effort to drastically restrict the kinds of UI interactions that are allowed on its various platforms, creating a standardized, intuitive language for interacting with software. The idea is to pretend that all user interfaces are made of sheets of magic paper, which can shrink or grow and be moved around, but can’t flip or perform other actions that wouldn’t be possible within the form factor of the device. Objects that the user is trying to remove must be dragged offscreen. When an element is moved, there is always something underneath it. All objects have mass and friction and move in a predictable fashion.
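As a toy example of that “dragged offscreen” rule, here’s a hedged Python sketch of the decision a swipe-to-dismiss interaction makes when the finger lifts. The thresholds are invented for illustration:

```python
DISMISS_FRACTION = 0.5  # must be dragged past half the card's width...
FLING_SPEED = 25.0      # ...or flung faster than this (pixels/frame)

def on_release(drag_offset, velocity, card_width):
    """Decide what a dragged card does when the finger lifts.

    Magic paper can't simply pop out of existence: the card either
    commits to leaving the screen or snaps back where it was.
    """
    passed_halfway = abs(drag_offset) > card_width * DISMISS_FRACTION
    flung_hard = abs(velocity) > FLING_SPEED
    if passed_halfway or flung_hard:
        return "animate offscreen, then remove"
    return "snap back to resting position"

# Dragged 180px across a 320px card: past halfway, so it goes.
print(on_release(drag_offset=180, velocity=4.0, card_width=320))
```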

In a lot of ways, Material Design is a further refinement of the ideas introduced in iOS, ensuring that all interactions with the software take place using the same language and styles, so that users never have to deal with contradictory or unintuitive interaction paradigms. The idea is to let users learn the rules for interacting with software very easily, and to let them trust that new software will work in the ways they expect it to.

On a larger note, human-computer interfaces are approaching the next big challenge, which amounts to taking the ‘screen’ out of touchscreen: the development of immersive interfaces designed to work with VR and AR platforms like the Oculus Rift and future versions of Google Glass. Making touch interactions spatial, without the required gestures becoming tiring (the “gorilla arm” problem), is a genuinely hard problem, and one that we haven’t solved yet. We’re seeing the first hints of what those interfaces might look like in devices like the Kinect and the Leap Motion, but those devices are limited because the content they’re displaying is still stuck to a screen. Making three-dimensional gestures to interact with two-dimensional content is useful, but it doesn’t have the same kind of intuitive ease that it will when our 3D gestures are interacting with 3D objects that seem to physically share space with us. When our interfaces can do that, that’s when we’ll have the iPhone moment for AR and VR, and that’s when we can really start to work out the design paradigms of the future in earnest.

The design of these future user interfaces will benefit from the work done on touch: virtual objects will probably have mass and friction, and enforce rigid hierarchies of depth. However, these sorts of interfaces have their own unique challenges: how do you input text? How do you prevent arm fatigue? How do you avoid blocking the user’s view with extraneous information? How do you grab an object you can’t feel?

These issues are still being figured out, and the hardware needed to facilitate these kinds of interfaces is still under development. Still, it’ll be here soon: certainly less than ten years, and probably less than five. Seven years from now, we may look back on this article the same way we look back on the iPhone keynote today, and wonder how we could have been so amazed about such obvious ideas.

Image Credits: “SterretjiRadar” by Rupert Ganzer, “sin-gular” by Windell Oskay, “Android eating Apple” by Aidan

  1. Dick
    August 2, 2014 at 2:52 pm

    I remember using my first touch screen product - an HP desktop computer/terminal in 1979 or 80. I also remember making a bet with my boss that computers would be on every manager's desk within ten years. I was wrong - almost everyone had one by the 1990s, not just managers!

  2. Jessica C
    August 1, 2014 at 8:09 pm

    Some really interesting points here, Andre. I had never considered the challenges of resistive touchscreens, that they don't afford gestures like swiping very well.

    It also amuses me how we find the spatial gestures (the ones where you move your arm through the air to swipe through views on screen, for example), like in the movie Minority Report, so much fun to watch, but we rarely think about how exhausting they must be to do. They add a lot of motion and dynamism to the world of computing (great for Hollywood visual effects), but aren't necessarily practical.

  3. Edward V
    August 1, 2014 at 3:58 am

    I just want to say: humans are so smart!! I cannot imagine the world of the future!! How many things that were impossible have become reality!!

    • DingusKhan
      August 1, 2014 at 3:01 pm

      Well, some humans.

  4. David R
    August 1, 2014 at 12:02 am

    I like articles like this, which take us to the past and make us (re)think and compare where we are and where we were.
