Responsive Design & Touch Interfaces


The phrase “responsive design” has been getting a lot of attention lately. Conferences are organizing whole sessions around it, Google has come out in support of it (when it makes sense for users), and everyone is scurrying to make sure their website fits the latest buzzword. But while responsive design may be new to some people, the concepts and technologies involved have been around for years. It also doesn’t help that most people think responsive design just means your website resizes itself depending on the width of your browser window or the size of your handheld device. If that’s your understood definition, you’re missing the point.

Responsive design means a lot more.

One thing people forget about is the nature of the interface. On a computer, you have a monitor, a keyboard, and a mouse. You use all of these elements to interact with the website. On a phone or a tablet, there is no keyboard or mouse; you touch the screen, and therefore the website, directly. These are the usage situations we are developing for today, and this is what responsive design represents.

High-level responsive design doesn’t just scale to fit your device; it takes the actual features of each device into consideration. If all your website does is resize itself down for my iPhone, that’s nice, thanks, but you’re missing a lot of other things you could be taking advantage of.

Like GPS. Or the touch interface. Or situations where you have more real estate to work with, not less. If you have a desktop computer with a 27” monitor, you should be taking advantage of that room and the implied faster Internet connection. Here’s where you can use those wickedly high-resolution images that no one wants to wait for (and pay for) on a 3G connection.
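One simple way to reserve those heavy assets for big screens is a CSS media query: small screens get a lightweight default, and only wide viewports ever download the large file. A minimal sketch (the breakpoint, class name, and file names here are placeholders, not a prescription):

```css
/* Default: a lightweight image for small screens and slower connections. */
.hero {
  background-image: url("hero-small.jpg");
}

/* Only wide viewports ever request the high-resolution version. */
@media (min-width: 1200px) {
  .hero {
    background-image: url("hero-large.jpg");
  }
}
```

Because browsers only fetch background images for rules that actually apply, the phone on 3G never pays for the big file.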

Touch & The Hover State

Because many people now have mobile devices, and many mobile devices have touch screens, we need to account for touch when we do responsive design and understand how it alters the experience.

Something we design for when we’re building a website is “the hover state.” This is what happens to an element when you hover over it with your mouse. You see this on links, which may change color. You see this on slideshows, which may reveal image captions. We spend time crafting these behaviors.

The thing is, touch interfaces do not have a mouse, so there is no hover state.

If, for example, you are designing a navigation menu, you might have child pages appear under each parent page when the user hovers over the parent page with their mouse. This isn’t going to work on an iPhone. You’ll need to adapt for that and make sure there’s a way for all users to get to the child pages. For example, if a user taps on a parent page, the parent page could have a sub menu for the child pages. Just don’t force all of the navigation into a menu that only appears on hover.
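One way to structure that tap-to-open behavior is to keep the open/closed state in a small controller and wire it to click or touchend events on each parent item. A minimal sketch, with assumptions made for illustration: the menu ids, and the choice that tapping one parent closes any other open submenu.

```javascript
// Tracks which submenu is open. In the browser you would call toggle()
// from a click/touchend handler on each parent item, then add or remove
// an "open" CSS class based on the returned state.
function createMenuController() {
  let openId = null; // id of the currently open submenu, or null

  return {
    // Tapping a parent toggles its submenu and closes any other one.
    toggle(id) {
      openId = (openId === id) ? null : id;
      return openId;
    },
    isOpen(id) {
      return openId === id;
    },
    // Tapping outside the menu should close everything.
    closeAll() {
      openId = null;
    },
  };
}
```

On desktop you can keep the hover behavior in CSS and treat this controller as the touch path, so both kinds of users can reach the child pages.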

This doesn’t mean you should ignore the hover state. It doesn’t mean you shouldn’t design that. In fact, you may want to change your point of view about the hover state. Instead of saying that the iPad doesn’t have a hover state (and is therefore lacking a feature), say that a desktop computer has a hover state (and has an additional feature). You might want to design the hover states as added features, as extra bells and whistles for the people who can support it. But don’t let crucial functionality depend on something that not all users have.
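CSS can express exactly this “hover as a bonus” stance: the `@media (hover: hover)` query applies rules only on devices whose primary input can actually hover, so touch users simply never see them. A sketch using the slideshow-caption example from above (the selectors are placeholders):

```css
/* Everyone gets the caption; it is never locked behind a hover. */
.slide .caption {
  opacity: 1;
}

/* Devices that can truly hover get the reveal-on-hover polish. */
@media (hover: hover) {
  .slide .caption {
    opacity: 0;
    transition: opacity 0.2s ease;
  }
  .slide:hover .caption {
    opacity: 1;
  }
}
```

The crucial content is visible by default; the hover behavior is layered on only for the people who can use it.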

Let me repeat that: Don’t let crucial functionality depend on something not all users have.

The touch interface is more than just an organic way to click on something. We are seeing gestures unfold and, through that, gesture search. Are you allowing users to swipe through your website? How is a one-finger swipe different from a two- or three-finger swipe? What about pinching? Really, the mouse looks kind of dull now, doesn’t it?
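The finger-count question above boils down to classifying a completed touch: how many fingers, and which way did they move? A minimal sketch of that decision (the threshold, gesture names, and function are made up for illustration; in the browser you would feed it the finger count and the pixel deltas between touchstart and touchend):

```javascript
// Classifies a completed touch from its finger count and net movement.
// dx/dy are pixels moved between touchstart and touchend.
function classifyGesture(fingers, dx, dy, threshold = 30) {
  const distance = Math.hypot(dx, dy);
  // Barely moved: treat it as a tap rather than a swipe.
  if (distance < threshold) {
    return fingers === 1 ? "tap" : `${fingers}-finger-tap`;
  }
  // The dominant axis decides the swipe direction.
  const direction = Math.abs(dx) >= Math.abs(dy)
    ? (dx > 0 ? "right" : "left")
    : (dy > 0 ? "down" : "up");
  return `${fingers}-finger-swipe-${direction}`;
}
```

A real implementation would also need to handle pinch (fingers moving toward or away from each other), but even this much shows how quickly the design space outgrows the single-pointer mouse.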

Now that you’re thinking about touch interfaces, let’s think about the future. Have you read anything about Google Glass? How will your design handle its constraints? How will it take advantage of its features? More to the point, Google Glass has no keyboard, no mouse, and no touch screen; you interact with it through voice commands. How will your design accommodate this radical shift in interfacing?

As designers and developers, we need to be thinking about things like this from the very beginning of the process. Our clients don’t always think about their users, and aren’t always aware of the variety of ways users interact with the web. Sometimes they are happy with something that works with IE8, because that’s what they use. They’re not always thinking about kids with smartphones trying to look something up in a restaurant, or guys browsing on the couch with a tablet while the TV is on. But they should be. It’s our job to make sure we’re not building something people can’t actually use.