Designing bots

5 questions for Desiree Garcia: Moving beyond building features and solutions to products and experiences.

By Mary Treseler
December 15, 2016

I recently asked Desiree Garcia, designer at IBM, to discuss her experiences in designing for bots, balancing listening to your gut versus listening to stakeholders, and why content-first design is a must. At the O’Reilly Design Conference, Desiree will be presenting a session, Bots may solve some of our problems; here’s how they’ll put us on the hook for others.

You’re presenting a session on building a bot with IBM Watson. What are some of the new challenges bots present for designers?

We designers love to talk about empathy and what that means. When designing bots built on AI, I think there’s more to building empathy for the user than writing a convincing dialogue, or coming up with ways to ease the user’s frustration when the bot isn’t perfect. The value of using an AI technology like Watson is that you can tackle increased complexity (so, solving problems beyond ordering pizza). That requires a designer to take on a systems-thinking mentality in addition to an empathic one.

A designer who thinks in systems will get to know their users’ problems better and will be able to see the point where the bot technology won’t be able to solve problems anymore because a problem may exist beyond the screen. A designer who thinks in systems can also understand how a bot experience can be used to go beyond the screen on purpose; we’ve been hearing about the impact of those design decisions this month.

So, the challenge to any designer considering working on a bot starts with mapping the full context of a problem. Next comes deciding whether to identify and solve the adjacent or leftover problems, leave them to the client, or let them go altogether. Depending on the nature of the problem being solved, that choice may be more or less complicated. Sometimes it raises big questions about what we are ultimately creating as an industry.

I work for IBM Watson, which is not just a leading AI company but one that targets businesses, and therefore a broader range of problems that are very wicked indeed. I was also hired, ultimately, through IBM Design, whose mission is to transform the company through design culture and move it beyond a mentality of building features and solutions toward products and experiences.

So, I have an added challenge: to create things that reflect well on the company’s technology and instill confidence in it, but also to communicate to the broader design community what AI can and can’t do today. It would be wrong for me to design a bot that is super-realistic, impressive, and charming, but doesn’t actually solve problems. It would also be wrong for me to design a bot that solves problems but doesn’t embody product design gold standards. In either case, the design community wouldn’t learn anything about designing with AI, and I would shortchange my employer on why they should even work with designers.

Since artificial intelligence is still young, I think there’s a need to have a point of view, to note the ways we may be setting precedent for product design throughout the industry, and to know how to articulate it inside our multidisciplinary teams and throughout the broader design community.

In your talk, you plan to cover why you chose a path to impress a stakeholder, without considering the context. Can you share a bit more about what happened?

Ultimately, this all was a lesson in design leadership. I let myself be affected by the opinion that designers aren’t technical people, or that designers aren’t business people.

One of the strategies the design team engaged in was to introduce people to this idea of sample applications for realistic use cases. That’s a good goal, but there wasn’t alignment on who our user was. Traditionally at IBM, these projects were created for specific potential clients, with the goal of landing a deal. You led with the technology in a rough prototype, and you sold the value to the customer in a meeting. The value of design that our in-house client, so to speak, was expecting was a prettifying effect.

That’s why, when I designed an interface that showed a Watson bot troubleshooting people’s problems with Nike+ products on Twitter, it seemed pretty straightforward. I helped design a bot simulation (meaning that it read Tweets but didn’t actually tweet back to those users) that could respond with a fix if it fit in 140 characters or fewer, provide a link to a knowledge base, or get an imaginary human to help if it was unable to solve the problem. I was able to show the team how they could leverage things in a user’s Tweet feed, like their tone of voice, personality, emotion, or interests, to tailor responses or hold conversations.
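To make that flow concrete, here is a minimal sketch of the triage logic described above. It is not the actual Watson prototype; every name, value, and helper in it is hypothetical, and a real system would use a trained model rather than keyword lookup.

```python
# Hypothetical sketch of the triage flow described above. This is NOT the
# actual Watson prototype; every name and value here is illustrative.

TWEET_LIMIT = 140  # Twitter's per-tweet character limit at the time

# Stand-in for a trained model that maps known issues to short fixes.
KNOWN_FIXES = {
    "sync": "Try re-pairing the device in the Nike+ app under Settings.",
}

def suggest_fix(tweet):
    """Return a short fix for a recognized issue, or None."""
    for keyword, fix in KNOWN_FIXES.items():
        if keyword in tweet.lower():
            return fix
    return None

def find_kb_article(tweet):
    """Stand-in for a knowledge-base search; returns a URL or None."""
    return None

def triage(tweet):
    """Try a short fix first, then a knowledge-base link, then escalate."""
    fix = suggest_fix(tweet)
    if fix and len(fix) <= TWEET_LIMIT:
        return fix
    article = find_kb_article(tweet)
    if article:
        return "This guide may help: " + article
    return "Escalating to a (simulated) human agent."

print(triage("My watch won't sync with the app"))
```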

Somewhere down the line, the product owner decided that it would be cool if we could release this app for developers looking to tinker with Watson for the first time, and that we should generalize the app—so, instead of handling issues specific to a device, it would just handle all problems on Twitter. If you’re familiar with the types of things people report on Twitter, you’d know that they’re sometimes really serious, and sometimes beyond the scope of what social media companies can solve.

The new prototype we had on our hands brought these situations to the fore, and it painted our bot, and our technology, in a bad light. It showed people’s questions about passwords being answered easily. But we also saw people reporting profiles that were stalking them and threatening to dox them, or people trying to flag hate speech and terrorist accounts. The knee-jerk reaction on the team was to censor that content for the sake of releasing the app as a tinkering resource, but the volume of those types of reports was so large that the result was a lackluster app.

Part of it was this misconception, even inside the company, that AI is perfect out of the box. In most cases, especially in domains with grey areas, a human has to train the bot, and we didn’t do that here. That’s something the design team knew, but I didn’t position it strongly enough as a risk; again, this was just a sample project for developers to tinker with.
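For readers wondering what that training looks like in practice, here is a rough, hypothetical sketch, and emphatically not Watson’s actual API: humans label real utterances up front to teach the classifier its intents, and anything the model is unsure about routes back to a human instead of the bot answering on its own.

```python
# Hypothetical human-in-the-loop sketch -- not Watson's actual API.
# Humans label real utterances up front; low-confidence cases are
# routed back to humans and become future training examples.

from dataclasses import dataclass

@dataclass
class Example:
    text: str    # a real utterance pulled from the support feed
    intent: str  # the label a human reviewer assigned

# Grey-area cases are exactly the ones that need human judgment first.
training_set = [
    Example("I forgot my password", "account_recovery"),
    Example("This account keeps threatening me", "safety_report"),
    Example("Please take down this hateful profile", "abuse_report"),
]

CONFIDENCE_FLOOR = 0.7  # below this, a human reviews instead of the bot

def handle(utterance, classify):
    """Let the bot answer only when the model is confident."""
    intent, confidence = classify(utterance)
    if confidence < CONFIDENCE_FLOOR:
        # Escalate, and capture the case so a human can label it later.
        training_set.append(Example(utterance, "needs_label"))
        return "queued_for_human_review"
    return intent

# Toy classifier stand-in: always unsure, so everything goes to review.
print(handle("someone is doxxing me", lambda u: ("unknown", 0.2)))
```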

But that in itself is an issue. We failed to account for the needs of potential end users on Twitter, but also our own core users—developers. The business assumed that we could hand them a fake app with a bunch of different APIs and they’d figure out the rest, and build fabulous multi-tasking bots that could solve complex problems. But as designers, we knew that we needed to show them the things unique to developing with AI, and the role of the human-AI training relationship that’s needed to really solve an end user’s problem before setting it free. You could say that for AI, developers teach designers, but designers teach developers back.

So, in the end, at best, a developer would see our app and be unimpressed with its abilities; at worst, they’d assume that failing to acknowledge the gravity of some of the problems people report on Twitter is okay. We just set a bad example.

As a broader team, the root of this issue went back to an old habit of building specific technical demos to land deals with specific companies; that’s very different from creating work that is public and is trying to court the developer community. It’s a disagreement on who the user is, how many users we have to design for, and what the value of design work is.

You note that this project reminded you of the importance of designing with content first. Can you elaborate?

When I used to work mostly on websites, the mantra of content-first was really motivating and easy to put into practice. When I started working on Watson and saw other designers build sample bots for developers to use as resources, I thought about content-first, but I limited my thinking to the bot itself—how should the bot talk? I thought about the body of knowledge that a bot would have to work with to answer questions, but others on the team and I assumed that the NLP algorithms would be able to figure out most of the conversation. When I thought about the user, I thought about what I could do to keep their attention—again, I made it about the bot, and not about them. To me, that was the bulk of solving the design problem.

What I learned the moment we unplugged the Nike+ feed and plugged in the Twitter support feed was that the content tells you how well your perceived solution will actually solve the problem. I totally understand that bots are trendy, and everyone wants to show they can make one, make it adorable, or make it as realistic as possible. But in solving real problems, this is a classic case of why it’s not smart to start with a solution and then force the user to adapt. The gravity of the content we had in this case was very helpful in illustrating that designing content-first revealed a service problem, not a technology problem.

What other lessons or reminders did you discover along the way?

IBM Design Thinking has this mantra we call The Loop—continuously observing, reflecting, and making when creating products and experiences. In this mindset, we used a content-first approach to make a prototype that let us observe a powerful insight. We reflected on it, but we didn’t make any more after that.

Adopting a solid design process is tricky and uncomfortable for a lot of tech companies, especially if they’re racing each other to be “first,” but it’s especially important for AI. A lot of the things we are pioneering spend most of their time in the R&D phase and then hit users too fast. Designers are not just comfortable with process; they thrive and offer their best value when they can do things like frame problems for a team, prototype solutions with them, and advocate for users.

At IBM, there are two things that I’ve learned about the designers who work on Watson: they are definitely technical people, and they definitely understand how design is good business. We understand the precedent we can set for AI—anywhere from how design can inform machine learning algorithms to issues of diversity and inclusion—and we’re hungry to lead.

You’re speaking at the O’Reilly Design Conference in March. What sessions are you interested in attending?

I always end up wanting to see talks that are scheduled at the same time! I’m already bummed that I won’t be able to see some of these. I’m the type of designer who falls in love with problems more than solutions, so I’m drawn to anything that helps me understand our users at Watson and what they’re trying to solve. For example:

The future will see you now

Rethinking design tools in the age of machine learning

Designing good robots

When your internet-connected things know how you feel

Designing smart things: balancing ethics and choice

Designing conversational voice user interfaces

I’m a career-switcher, so I was not initially trained to be a designer; I taught myself over several years. The day I decided to go for it and apply for a design job, I was actually at a meetup where Dan Mall was speaking, so I’m going to his talk just to have a way of reflecting on the course my own career has taken. Similarly, since I learned a lot about the type of design leader I want to be as a result of leading this project, I want to go to Aarron Walter’s talk on design leadership. I’m also definitely going to the NASA talk because, NASA.
