On making something people want

Make Something People Want

This post is part of this week’s Startup Edition series to answer the question, ‘How do you discover what people really want?’

As a designer and entrepreneur-in-action at a venture firm, I’ve had, like many of you, myriad projects over the years. However, they haven’t always resulted in commercial success. That had nothing to do with growth hacking, customer acquisition cost, or any other stratagem du jour you might read about in a top-ten list on an SEO blog. The core issue was a lack of focus on the elusive goal of ‘making what people want’. But over the years, I’ve learned and honed tools that improve my chances of creating killer products with great user experiences that people love and share with their friends.

Roughly speaking, developing a product people want breaks into two parts. The first is having sufficient insight into the problem. The second is creative problem solving and “tinkering” once you are armed with the right data. In this multi-part post, I’ll explain these two aspects and the glue that stitches them together (arguably the hardest part!).

So how do we, collectively as builders, make products people want? Some say you have to “design for yourself” and “scratch your own itch”. At first glance, this seems solipsistic – does the universe of successful products consist only of tools their creators wanted for themselves? The answer is clearly ‘of course not’.

The microsecond you start thinking about building a tool, product, service, utility, or an experience for someone else, you are already at a handicap. Your own cognitive biases shape your understanding, and it’s hard to understand someone else’s frame of mind. So what do you need to do?

You proverbially “get out of the building”, talk to your prospective users, and ask about their needs and obstacles. Often this research is fertile, exploratory, and nuanced, especially when you are inexperienced in your target user’s field. For example, if you are building a product for a salesperson and you have never spent a day selling in your life, you will spend lots of time playing catch-up. Every anecdote or story you hear will mold your mental image of this person. During this phase, you should think like an actor who’s trying to “get into character”.

The next step is where I see lots of untrained research practitioners flounder. By now, you’ll have a great set of user data to design with. But data alone is abstract, nebulous, and decontextualized. You need shorthand to communicate your ideas about user needs succinctly. It is important that the rest of your team feels empowered to 1) understand the problems of your user and 2) imagine/create solutions to address these problems.

The reason this juncture in the project is so critical is that it sets the stage for all of the work that follows. If there is misinterpretation, you’ll fall into traps like 1) building too literally to the ‘user-need’ spec and missing opportunities for creative improvisation, or 2) failing to prioritize appropriately, building the wrong things first, and draining whatever remains in the “developer fuel tank”.

One common approach is to create a geometric “average” of the types of people you are trying to build for. It’s a well-known process called creating ‘personas’. These are representations of the characteristics of a person in a nice, tidy package. They let you keep in mind the abstraction that a ‘certain’ individual behaves in ‘certain’ ways and has a ‘certain’ set of needs. The problem with personas alone is that most of the nuanced and fertile stories you’ve heard in talking to all of those salespeople get lost along the way. Let’s say a salesperson you interviewed described missing a deal because she didn’t have access to an important purchase-order revision at the Cleveland airport on the last day of the quarter, just before a client’s budget closed, and on top of that, her phone battery was at 5%. No matter how hard you try, even the most well-developed personas couldn’t capture this level of pain with any precision.

Although personas are frequently utilized and can be valuable tools for understanding your users, they have their shortcomings. Stay tuned here and follow @AshBhoopathy to learn other, more effective techniques for communicating what you’ve learned.

When do you use UI walkthroughs?

Recently, quite a bit of ink has been spilled in the blogosphere about UI walkthroughs. I thought I’d do my part by trying to come up with a better framework, or a list of heuristics, to consider before deciding whether or not to ship a walkthrough with a product:

I responded to this question on Quora, but I thought I’d include the post here because it might be helpful to more people deciding whether or not to include walkthroughs in their product interfaces.

John Gruber, who is a sophisticated internet user and blogger, says that a “user should be able to figure out how an app works just by looking at it”: If You See a UI Walkthrough, They Blew It. To me, an overbroad statement like that is akin to saying “a user should be able to fly a plane just by looking at the controls”, or even “a user should be able to drive a car just by looking at it”. We need to have realistic expectations of the interfaces that we use and be more precise about what we’re building if we’re ever going to evolve interaction design.

Products fall into multiple categories based on patterns of usage and intended audience.

Some products are daily/heavy-use products, which should optimize for the expert user. These products need to be designed such that, once the user understands how to navigate the product and what its functionality does, they can perform regular actions with ease.

Examples: A todo list, a weather app, or an app for sports scores and news on a mobile phone. A POS (Point of Sale) system where the operator has sufficient time for training. [Keep in mind that POS systems are designed for fast transactions to keep lines short and moving smoothly.]

Other products are used many times, by many different users, but infrequently by each. These interfaces need to be intuitive, require as little handholding as possible, and should offer 80% of the benefit for 20% of the effort. Additionally, that 20% of effort should be within reach of almost everyone who enters the experience.

Examples: a photo kiosk at the local drugstore, a fast food ordering counter with an iPad or self-checkout system at a grocery store.

What does this have to do with UI walkthroughs? Because the first class of products is not designed to be *intuitive* on first use, these products need scaffolds (or, to put it more strongly, “crutches”) for the user to understand their operating protocol. Once the user understands how the system works, they will be able to use the product quickly and effectively on a repeated basis.

Here are a few articles I’ve run across recently that reference the UI walkthrough debate if you’re interested in following along:

– The original article that started it all.

– Are UI walkthroughs evil?

– And the corresponding discussion on HN: If you see a UI walkthrough, they blew it.

– Here’s a company, Kera, that’s focused on making UI walkthroughs, weighing in on the discussion: Five Principles for Effective UI Walkthroughs

– And Techcrunch’s take: Rethinking The Mobile App “Walkthrough”

There are a few interesting discussions here: the author of the first article uses strong language, suggesting “UI walkthroughs are evil” (to be fair, a later blog update clarified that not all walkthroughs are bad). The article’s claims are that UI walkthroughs are confusing, are presented at a time when a user lacks context or won’t remember enough of the walkthrough to learn its facets, and annoy otherwise impatient users.

Most of these issues can be addressed by good execution: a minimal walkthrough combined with progressive disclosure, as the user experiences more and more of the app over their lifetime of use, is a great way for the user to learn the product over time.
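To make “progressive disclosure” concrete, here’s a minimal sketch in Ruby. Everything in it is hypothetical (the class, hint names, and session thresholds aren’t from any shipping product): the idea is simply to surface one hint at a time as the user’s session count grows, instead of dumping a full walkthrough on first launch.

```ruby
require 'set'

# Hypothetical progressive-disclosure scheduler: rather than one big
# up-front walkthrough, reveal a single hint at a time, keyed to how
# many times the user has opened the app.
class HintScheduler
  # Each hint unlocks once the user has opened the app N times.
  HINTS = [
    { after_sessions: 1, id: :swipe_to_archive },
    { after_sessions: 3, id: :pull_to_refresh },
    { after_sessions: 7, id: :share_sheet }
  ].freeze

  def initialize(seen_hints: [])
    @seen = seen_hints.to_set
  end

  # Return the next unseen hint the user is "ready" for, or nil.
  def next_hint(session_count)
    HINTS.find do |hint|
      session_count >= hint[:after_sessions] && !@seen.include?(hint[:id])
    end&.fetch(:id)
  end

  def mark_seen(id)
    @seen << id
  end
end
```

The design’s appeal is that each hint arrives when the user has enough context to absorb it, which sidesteps the “no context, won’t remember it all” objections.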

I’ve put together a few simple diagrams that might help UXers and product builders decide whether or not they want a product walkthrough. And if you need some help building your 1UX or product walkthrough, feel free to ping us at LiziLabs.

We won RailsRumble

We (@ashbhoopathy, @railsjedi, and @richlengsavath) won both the Public Favorite and 3rd place in this year’s Rails Rumble.  We are flattered and honored by the outpouring of community support for our simple idea.

When we started building DeployButton, we wanted to scratch our own itch and solve for our own needs.   This summer, our team at Lizi decided that we needed a less expensive web host for our site(s).   Like many of you, we’re a fast moving consumer web team that likes to iterate and put lots of new things out to “Build“, “Measure“, and “Learn” to validate that we’re making something people want™.  


Jacques, aka “@railsjedi“, created a really sweet system to power our continuous deployment, using Github, Linode, and Opscode Chef. I’ll save the more technical post for later, but for the layperson, this essentially means that as soon as someone commits working code to the master branch of a source-code repository, it is automatically deployed to the server and the whole team is notified.

Our team relies on a few tools for group communication, one of them being Hipchat. Hipchat provides deploy hooks that can notify us when different things happen, like code being checked in and deploys starting or completing. This is a great way for the team to 1) stay abreast of what’s going on with the code base, and 2) know which “application state” users are looking at in the event that an error occurs (errors also trigger HTTP hooks that notify us in the chat).
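The decision logic of such a deploy hook can be sketched like this. To be clear, this is a simplified assumption, not our actual Chef setup: the payload shape loosely mirrors a GitHub push event, and the `deployer`/`notifier` callables stand in for the deployment system and the Hipchat notification.

```ruby
require 'json'

# Simplified sketch of a post-push deploy hook: deploy only when the
# push landed on master, and report what happened to the team chat.
def handle_push(payload_json, deployer:, notifier:)
  payload = JSON.parse(payload_json)
  branch  = payload['ref'].to_s.sub('refs/heads/', '')
  sha     = payload.dig('head_commit', 'id')

  if branch == 'master'
    deployer.call(sha)                                     # kick off the deploy
    notifier.call("Deploying #{sha[0, 7]} to production")  # tell the room
    :deployed
  else
    notifier.call("Push to #{branch} ignored (not master)")
    :skipped
  end
end
```

Passing the deployer and notifier in as callables keeps the branching logic testable without touching a real server or chat room.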

Little did we know that the night we finished and submitted our final version to the RailsRumble repo, we would hit the front page of HackerNews. Soon thereafter, we had over 15K visitors come to our site, and over 6K who’ve already used the product! We knew then that our product had struck a nerve and might fulfill a need for a wide assortment of people: independent WordPress builders, small-to-midsized web consulting shops, and weekend hobbyist Rails devs.

Over the next few months, we’ll be improving DeployButton to have many creature comforts that we’d want to see in a product like this, since we need it anyway.  

Follow along in our progress here, @DeployButton, and tweet to tell us how you use continuous deployment at your startups and enterprises.

Oh, and if this is still “cool” to do, Like us on Facebook and we’ll let you know first about beta releases to our product :-)

Cooper’s “No interface” Parlor

Last night, I went to a talk given at Cooper, which is regarded by many as a world-class interaction design studio. The discussion was based on a blog post that recently got quite a bit of social airplay, called “The best interface is no interface“.

The best interface is no interface

We’re at an interesting juncture in technology with people using software on their smartphones, in cars, and now even in home appliances.   A recent Intel study suggested that the cost of computing will decrease so much that by around the year 2018, it will be economically feasible to put a reasonably powered microcomputer inside most small appliances.  

And of course, with this new power comes great responsibility for designers. Commercial schemes to capture our intent, advertise to us, and make us somehow aware of a new “thing” at any given time will only increase. Okay, maybe that’s a bit cynical, but at the very least, it’s highly likely that if computing power continues to surround us, we will also be surrounded by more gestural and tactile interfaces. And for an interaction design studio to say that the best interface is no interface is a bit of a coup. But, as I had suspected, the attendees held many mixed feelings. Luckily, the parlor drew a diverse crowd, ranging from people with commercial intent (eCommerce vendors) to designers who felt that “no interface”, or even “minimal interface”, is exactly the wrong prescription in many instances.

An example that Don Norman mentioned was a comparison between a simple silversmithing hammer and Photoshop. Someone asked why Photoshop isn’t simpler, like the hammer, but Don keenly pointed out that it’s absurd to compare one tiny tool to the entire toolbox that Photoshop intends to be. That might seem like an obvious distinction, but I think there’s a tendency to make gross generalizations about the necessity of UIs for certain use cases.

Toward the end of the evening, people seemed to reach a consensus that there are two broad categories of products: Disabling technologies (the ones we generally try to have fade into the background: e.g. California’s FasTrak, an elevator button), and Enabling technologies (the ones whose features we try to surface to users so they can do more with them: e.g. Photoshop, GarageBand). This dichotomy feels right, but I think reality is a lot more nuanced. What I’m most interested in is how this nets out for a product-focused startup as you decide what to build and who to build for. I’ll be thinking about this a lot in the coming months.

Another topic that came up during the evening was one of “control”, which I’m interested in learning more about. Don Norman brought up Nest (I don’t know why everyone loves that thing so much; I should check it out) and talked about how it had bad interaction design (he had clear examples why). Nest is a thermostat, a product typically thought of as one that “gets out of the way” of users. You give up a small amount of control over what’s actually happening in exchange for intelligent defaults, settings you can happily change if you really want to; but then you’re left fiddling with an interface with minimal surface area that takes a while to “set up” the way you want.

Related to control: at my startup Lizi, we created a 1-click scheduling tool that’s a great alternative to Tungle.me. We believe people will happily trade away some control if Lizi solves the “paradox of choice” problem of picking times from people’s busy schedules. We’re realizing that not all users work this way; many prefer to keep that control, even if it means two or more extra clicks. It’s a fascinating discovery for us, to say the least.

Notably absent during the entire evening was an idea I’m keen to explore: the role of anthropomorphic agents in our daily lives. In 2012, we’re finally in a position where people understand the abstraction of “apps”, thanks to Apple’s marketing dollars. Taken to the next level, anthropomorphized agents (Siri, et al.) enable a simple abstraction that is far more powerful in the “No UI” debate. We’re spending so much time building machine learning, heavy AI, and “collective intelligence” systems that learn and adapt to people’s needs. But people aren’t in the mindset right now that their tools… their apps… actually learn. When they think of Siri (or what Siri could have been if executed better), I think people’s minds are much more malleable. We say “Siri is getting smarter” or “Siri can do X now”. We don’t think of “apps” as having intelligence in the same way; we think of them as “dumb” interfaces that we still have to train and teach.

Golden and team at Cooper went as far as to provoke the thought “The Best UI is no UI”.  The conversation continues over at this Branch.

Peter Thiel’s class: Notes about Artificial Intelligence

Continuing on the theme of artificial intelligence from the David Eagleman post…

Our team at Lizi, which is somewhat obsessed with these great notes on Peter Thiel’s class by Blake Masters, was recently talking about the session dedicated to AI. There are unquestionably great points raised in this talk:

It might still be too early for AI. There’s a reasonable case to be made there. We know that futures fail quite often. Supersonic airplanes of the ‘70s failed; they were too noisy and people complained. Handheld iPad-like devices from the ‘90s and smart phones from ’99 failed. Siri is probably still a bit too early today. So whether the timing is right for AI is very hard to know ex ante.

We can look at Siri today, and most of us would agree she’s a “parlor trick” at best. Siri can’t do most of the things that we want her to do, despite having one of the most powerful design and marketing muscles ever known in commercial history backing her up.

I am interested in the comparisons between searching for artificial intelligence and trying to get an airplane to fly in the early part of the 20th century. Again, here are some winning quotes:

Re: Airplanes in 1900… “People have been trying to build flying machines for hundreds of years and it’s never worked.” Even right before it did happen, many of the smartest people in the field were saying that heavier than air flying machines were physically impossible.

Scott Brown: Part of it is about process. What enabled the Wright brothers to build the airplane wasn’t some secret formula that they came up with all of a sudden. It was rigorous adherence to doing carefully controlled experiments. They started small and built a kite. They figured out kite mechanics. Then they moved onto engineless gliders. And once they understood control mechanisms, they moved on. At the end of the process, they had a thing that flies. So the key is understanding why each piece is necessary at each stage, and then ultimately, how they fit together. Since the quality comes from the process behind the outcome, the outcome will be hard to duplicate. Copying the Wright brothers’ kite or our vision system doesn’t tell you what experiments to run next to turn it into an airplane or a thinking computer.

This approach, we’ve come to learn from the Lean Startup movement, is vital to discovering a business model that connects to real users who are willing to pay for the service (with money or attention). I think we’re all big believers in this movement of incrementally finding a business model. We think the key to making AI commercially acceptable lies in getting people “ready” for it.

Why has AI progressed so slowly since the 1960s?

David Eagleman talks about why AI has progressed so slowly since the 1960s. I’m excited to check out his book Incognito on my next long flight.

Personally, I believe the biggest failure of artificial intelligence is the gap between the expectations science fiction has set for what AI is and can be, and what scientists, tinkerers, and entrepreneurs can actually build fast enough. In other words, there seems to be a wide gap between research/exploration and commercialization, largely because expectations for what AI can do for us are massively overinflated. That prevents us from taking the incremental approach that’s a vital part of building companies in this decade (and beyond?).

In the coming months, I’m going to be dedicating more blog articles to exploring the commercialization of AI in its various forms.

The new Digg: Good and Bad

Oftentimes when sites launch, I like to evaluate them for user utility, aesthetic appeal, and monetization potential. I mostly like what the hackers at the “new” Digg have done with the platform. It’s clean, allows for personal expression of my own “Diggs”, and is sufficiently social. Here’s what I like about it from a product perspective.

[Screenshot: Digg “popular” page]

[Screenshot: Digg “save to iPhone” feature]

  1. Seeing social/Twitter content inline with the actual articles works nicely. I’m assuming that if I logged in, it’d show me people I’m actually connected to, which is much more valuable than retweets from others.
  2. These graphs are overdone. Like many analytics products, Digg assumes that visualizing an excess of data on a graph is useful for people. It’s a nice, pretty graph, but who really cares how this has trended over the last day? In the long term, this is just noise.
  3. The new Digg score is a combination of Facebook “likes”, Twitter tweets, and Digg upvotes. This aggregate is generally better than a Digg score alone, but it made me think how useful this would be if the upvotes came from communities I legitimately cared about. Maybe I don’t care how many people have tweeted something, but I care a lot about how many comments it received on HackerNews?
  4. This is by far the best interaction metaphor and part of the UX of the new Digg. It’s evocative of my favorite iPad and mobile app, Instapaper. In fact, it combines two of the most powerful ideas on the web, exploration/discovery and self-curation, and tees up content in a clean format to read later.
  5. Finally, there’s the iPhone app. It’s pretty barebones, and I’m not really impressed, but it’s great for less than six weeks of work. I didn’t spend time testing how long synchronization takes, but sync is clearly the most important thing here (just like the rolling background sync done by Instapaper). I think the Reading List component is probably the most important feature: I tend to be a consumer, but not an active participant, on mobile. Other people participate more on mobile, so maybe making top stories more prominent takes precedence for them. In their iPhone app screenshots on the iTunes download page, Betaworks should clarify the Exploration → Reading on iPhone workflow rather than showing multiple screenshots of a Snoop Digg article that gives me very little context for what the product can do for me.
  6. Overall, the interface is nice and clean, pretty barebones, and allows for expression. There aren’t a ton of viral features for a site that arguably “invented” virality. I don’t like that I have to sign in with Facebook (at least provide a Twitter option, and G+ for extra credit). Not bad for six weeks of work.
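To make the scoring idea in item 3 concrete, here’s a purely speculative sketch (the signal names and weights are invented for illustration, not Digg’s actual formula) of a personalized aggregate score: weight each signal by how much *you* care about its community, rather than treating all likes, tweets, and upvotes equally.

```ruby
# Hypothetical personalized score: each signal (e.g. HN comments,
# tweets, FB likes) is multiplied by a per-user weight reflecting
# how much that community matters to the reader. Unweighted signals
# default to 0, i.e. they're ignored entirely.
def personal_score(signals, weights)
  signals.sum { |source, count| count * weights.fetch(source, 0) }
end
```

So a reader who values HackerNews discussion would set a high weight for `hn_comments` and a near-zero weight for raw tweet counts, and two readers could see the same story ranked very differently.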

[Screenshot: Digg iPhone app]

Congrats to the Betaworks team for shipping fast and learning from it.

Accelerators vs Apprenticeships

Apprenticeships used to be the way people became masters of anything: master craftsmen, master artisans, master chefs, master sushi makers. This story on NPR today reminded me of the importance of apprenticeships, internships, and on-the-job training in furthering career progress. It wasn’t until reading this story that I thought about the relationship between apprenticeships and the batches of graduates from “accelerator” programs.

Today, accelerator programs are largely exclusive. In fact, their popularity has risen in part due to their exclusivity and privilege. The exclusivity stems from the fact that after the period of “acceleration” is over, the ability to raise capital is much greater. After an apprenticeship, by contrast, the apprentice just has a “job”, most likely working for the master.

Hello, Computer

You bleed when you’re on the “cutting edge”, especially when it comes to user adoption of technology. It’s telling when established companies like Apple, which are considered “masters” of market timing, get it slightly wrong. Releasing a voice-activated “assistant” that users seem disenchanted with is okay as a test when you’re a large company with diverse revenue streams.

It’s not okay for a lean startup with limited resources and not much time to test its assumptions with users. I sometimes wonder if the world is really ready for voice-activated control of everything around us. We can’t all be Scotty.