Last night, I went to a talk at Cooper, which is regarded by many as a world-class interaction design studio. The discussion was based on a blog post that recently got quite a bit of social airplay, called “The best interface is no interface”.
We’re at an interesting juncture in technology with people using software on their smartphones, in cars, and now even in home appliances. A recent Intel study suggested that the cost of computing will decrease so much that by around the year 2018, it will be economically feasible to put a reasonably powered microcomputer inside most small appliances.
And of course, with this new power comes great responsibility for designers. Commercial schemes to capture our intent, advertise to us, and make us aware of some new “thing” at any given moment will only increase. Okay, maybe that’s a bit cynical, but at the very least, if computing power continues to surround us, it’s highly likely we will also be surrounded by more gestural and certainly more tactile interfaces. For an interaction design studio to say that the best interface is no interface is a bit of a coup. But, as I had suspected, the attendees held mixed feelings. Luckily, the parlor drew a diverse crowd, ranging from people with commercial intent (eCommerce vendors) to designers who felt that “no interface,” or even “minimal interface,” is exactly the wrong approach in many instances.
An example Don Norman mentioned was a comparison between a simple silversmithing hammer and Photoshop. Someone asked why Photoshop isn’t more simple, like the hammer, but Don keenly pointed out that it’s absurd to compare one tiny tool to the entire toolbox that Photoshop intends to be. That might seem like an obvious point, but I think there’s a tendency to make gross generalizations about the necessity of UIs for certain use cases.
Toward the end of the evening, people seemed to reach a consensus that there are two broad categories of products: disabling technologies, the ones we generally try to have fade into the background (e.g. California’s FasTrak, an elevator button), and enabling technologies, the ones whose features we try to surface so that users can do more with them (e.g. Photoshop, GarageBand). This dichotomy feels right, but I think reality is a lot more nuanced. What I’m most interested in is how this nets out for a product-focused startup as you decide what to build and who to build for. I’ll be thinking about this a lot in the coming months.
Another topic that came up during the evening was “control,” which I’m interested in learning more about. Don Norman brought up Nest (I don’t know why everyone loves that thing so much; I should check it out) and argued that it has bad interaction design, with clear examples of why. Nest is a thermostat, typically thought of as a product that “gets out of the way” of users. You give up a small amount of control over what’s actually happening in exchange for intelligent defaults and settings that you can happily change if you really want to, but then you’re left fiddling with an interface with minimal surface area that takes a while to “set up” the way you want.
Related to control: at my startup Lizi, we created a 1-click scheduling tool that’s a great alternative to Tungle.me. We believe people will happily trade away some control if Lizi solves the “paradox of choice” problem of picking times from people’s busy schedules. We’re realizing that not all users work this way. Many would prefer the control of choosing for themselves, even if it means two or more extra clicks. It’s a fascinating discovery for us, to say the least.
Notably absent during the entire evening was an idea I’m keen to explore: the role of anthropomorphic agents in our daily lives. In 2012, we’re finally at a point where people understand the abstraction of “apps,” thanks to Apple’s marketing dollars. Taken to the next level, anthropomorphized agents (Siri, et al.) offer a simple abstraction that is far more powerful in the “no UI” debate. We’re spending so much time building machine learning, heavy AI, and “collective intelligence” systems that learn and adapt to people’s needs, yet people aren’t in the mindset right now that their tools, their apps, actually learn. But when they think of Siri (or what Siri could have been if executed better), I think people’s minds are much more malleable. We say “Siri is getting smarter” or “Siri can do X now.” We don’t think of apps as having intelligence in the same way; we think of them as “dumb” interfaces that we still have to train and teach.
Golden and team at Cooper went as far as to provoke the thought “The Best UI is no UI”. The conversation continues over at this Branch.
Respond to me on Twitter: @AshBhoopathy or follow the discussion on HN.