As you’ve probably noticed, Machine Learning and Artificial Intelligence are here to stay and will continue to disrupt the market. Many products have AI functions inherently integrated (e.g., Netflix’s suggestions, Facebook’s auto-tagging, Google’s question answering), and Gartner forecasts that by 2024, 69% of a manager’s routine workload will be automated.
A lot of work has been done around designing products that make AI accessible for users, but what about designing a product that improves the AI model? How does UX approach the development of better AI?
I’ve always been very excited about AI, and for the past couple of months, I’ve been working on the Product Management and UX of several highly technical and advanced AI products. In my experience, bridging the gap between the science behind Machine Learning (ML) and the end-user is a real challenge, but it’s crucial and valuable. Humans have a huge responsibility when it comes to teaching the different models — it can either turn into something great or go horribly wrong.
In this article, I will focus on the two sides of an AI product, and then combine them into one approach that will benefit both the end-user and the ML model.
So, first, let’s focus on the two sides of the experience:
After becoming familiar with these, I’ll combine them into one Machine Learning Experience — Model-User Design.
User-centered design is the shared goal of everyone interested in UX. If the product is centered around a real user’s needs, it is far more likely to create a product-market fit and generate happy customers.
AI is pretty new to people. Many people are afraid of it for many reasons — from giving false predictions to taking away their jobs (not to mention their lives, but that’s some Terminator stuff). That’s why creating a good experience for the user is crucial.
There are a couple of tools we can use to create a good experience in AI products. We’ll cover some of them: finding the right problem to solve in order to provide value, explaining the model running “under the hood”, keeping the user involved in the learning process, and preparing for mistakes.
The basic rule of product-market fit, which applies to all other products, applies to AI. For the product to succeed, a real problem needs to be solved. If we create the most complicated state-of-the-art AI product that predicts the flying route of a fly, that would be a great model, but no problem is being solved and no value is being created. AI should add value to users and optimize the way they work.
“The only reason your product should exist is to solve someone’s problem.”— Kevin Systrom, Co-Founder of Instagram
Explainable AI means explaining to the user what the AI does. The user has the right to understand why the algorithm predicted something. Explaining the why creates a more reliable connection and a feeling of trust. There are many examples, such as content suggestions on Netflix and YouTube: “Because you liked X:”, or “Based on your watch history:”.
These sentences make you understand why Netflix suggested Ozark — because you watched Breaking Bad!
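One way to think about this in code: a recommendation carries its evidence along with it, so the UI can render the “Because you watched X” rationale instead of a bare suggestion. This is a minimal, hypothetical sketch; the class and field names are illustrative, not any real recommender API.

```python
# Hypothetical sketch: pair a recommendation with the evidence behind it,
# so the UI can always show the user *why* the model suggested it.
from dataclasses import dataclass

@dataclass
class Recommendation:
    title: str      # what the model suggests
    evidence: str   # the watched item that drove the suggestion

    def explain(self) -> str:
        # The explanation string the UI shows next to the suggestion
        return f"Because you watched {self.evidence}: {self.title}"

rec = Recommendation(title="Ozark", evidence="Breaking Bad")
print(rec.explain())  # Because you watched Breaking Bad: Ozark
```

The key design choice is that the explanation is not an afterthought: a suggestion without evidence simply can’t be constructed.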
You should also be aware that explainability is not just about the experience; it’s a regulatory matter as well. The GDPR includes an individual’s right to request a human review of an automated decision, to understand whether the algorithm has made a mistake.
We should keep in mind that the model doesn’t always know what’s best for the user, and that users should feel they have the power to affect the model and “teach” it. For example — create opportunities for the user to provide feedback if the prediction is right or not.
These types of messages enable feedback from the user, which will eventually help the prediction improve.
An AI algorithm won’t be 100% correct all the time. That’s why the algorithm should be able to project its confidence in a prediction: if the model isn’t very confident, the user should know about it and take it with a grain of salt. Also, be ready to handle mistakes and errors. The user is more likely to accept mistakes in AI if they come with an explanation of why the model arrived at its prediction (as mentioned before — explainability!). This should also be followed by information on how to improve the model in the future.
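A sketch of what “projecting confidence” could look like in practice: the raw confidence score is turned into hedged wording below a threshold. The function name and the 0.8 threshold are assumptions for illustration, not a prescribed value.

```python
# Hypothetical sketch: turn a raw confidence score into a message whose
# wording reflects how sure the model actually is.
def present_prediction(label: str, confidence: float, threshold: float = 0.8) -> str:
    """Wrap a prediction in language matching the model's certainty."""
    if confidence >= threshold:
        return f"This looks like a {label} ({confidence:.0%} confident)."
    # Below the threshold, hedge the wording and invite the user to verify.
    return (f"This might be a {label}, but the model is only "
            f"{confidence:.0%} confident. Please double-check.")

print(present_prediction("lemur", 0.95))
print(present_prediction("lemur", 0.55))
```

The same prediction reads very differently at 95% and at 55%, which is exactly the “grain of salt” the user needs.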
It’s really important to remember that AI has a huge impact on people’s lives, so a model’s predictions and mistakes carry colossal weight. Wrong predictions may be highly offensive to the user (e.g., Google’s horrible false classification) or cause physical damage and even death (e.g., accidents caused by self-driving cars).
Now that we’re aligned about what user-centered design is, let’s talk about how to make the design centered around the ML model — how to improve the model and make the learning process as efficient and beneficial as possible.
When we talked about user-centered design, our goal was to make the model understand the user. Now, let’s try to make sure the user understands the model.
To make this generic and straightforward, let’s establish a very high-level flow of the machine learning process:
In order to think about Machine Learning Experience, let’s forget for a second what we know about user interface components. Let’s talk about the process and how it meets humans.
The training part of the ML model is essentially taking a lot of data and uploading it so that the algorithm can learn from it. Let’s say we want to train a model to identify lemurs in pictures. A training process can include uploading 1,000 images, some labelled and some not, and then waiting for the model to learn. At the end of the process, the model will be trained and able to identify a lemur!
As users, we’d like to make sure the algorithm learned. That’s why it’s important to visualize and clarify the training process — things like the accuracy of the model, the number of epochs that it took for it to learn, etc.
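To make that concrete, here is a hypothetical sketch of condensing a training run into the few numbers a user actually needs to see (accuracy, epochs, whether it was still improving). The field names and input format are assumptions for illustration.

```python
# Hypothetical sketch: summarize a per-epoch accuracy curve so the user can
# see, in plain terms, whether the model actually learned.
def training_summary(accuracy_per_epoch: list) -> dict:
    """Condense a training run into the few numbers a user needs."""
    return {
        "epochs": len(accuracy_per_epoch),
        "final_accuracy": accuracy_per_epoch[-1],
        "best_accuracy": max(accuracy_per_epoch),
        # Did the last epoch still improve? If so, more training might help.
        "still_improving": accuracy_per_epoch[-1] > accuracy_per_epoch[-2],
    }

summary = training_summary([0.62, 0.78, 0.86, 0.91])
print(summary)
```

A UI built on a summary like this answers the user’s real question (“did it learn?”) without exposing the full training internals.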
Also, if we want to make sure the model works as we want it to, we can move to the inference phase.
In this part, we’d like to test the understanding of the model. Inferring, to put it in very simple words, is pressing the “run” button on the AI model, with a given input. If we take the lemur example from before, at this point, we would upload a picture and check that the model understands what a lemur is and what isn’t.
After seeing the result, the user should have the ability to provide feedback, so the model will learn and improve.
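The infer-then-correct loop above can be sketched as a simple feedback log, where the user’s corrections become labelled examples for the next training round. Everything here (function name, record fields) is a hypothetical illustration of the flow, not a real pipeline API.

```python
# Hypothetical sketch: record the user's verdict on each prediction so that
# corrected examples can feed the next training round.
from typing import Optional

feedback_log = []

def record_feedback(image_id: str, predicted: str, user_says_correct: bool,
                    corrected_label: Optional[str] = None) -> None:
    """Store the user's feedback; wrong predictions become new training data."""
    feedback_log.append({
        "image": image_id,
        "predicted": predicted,
        "correct": user_says_correct,
        # When the user corrects the model, keep their label for retraining.
        "label": predicted if user_says_correct else corrected_label,
    })

record_feedback("img_001", "lemur", True)
record_feedback("img_002", "lemur", False, corrected_label="cat")

# Every record with a usable label can go back into training.
retraining_set = [f for f in feedback_log if f["label"] is not None]
print(len(retraining_set))  # 2
```

Notice that both a confirmation and a correction produce usable training data; only feedback with no label is wasted.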
In order to make sure the model is performing well, monitoring is needed. It’s essential to understand the relevant metrics in order to monitor the model well. For a deeper understanding of the subject, I highly recommend reading this article.
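Two of the metrics commonly monitored are precision and recall. As a minimal sketch (computed from scratch here for clarity; the lemur example and data are illustrative):

```python
# Hypothetical sketch: compute precision and recall from logged predictions,
# two common metrics for monitoring a classifier in production.
def precision_recall(predictions, labels, positive="lemur"):
    pairs = list(zip(predictions, labels))
    tp = sum(1 for p, y in pairs if p == positive and y == positive)  # true positives
    fp = sum(1 for p, y in pairs if p == positive and y != positive)  # false positives
    fn = sum(1 for p, y in pairs if p != positive and y == positive)  # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0  # of the "lemur" calls, how many were right
    recall = tp / (tp + fn) if tp + fn else 0.0     # of the real lemurs, how many were found
    return precision, recall

preds  = ["lemur", "lemur", "cat", "lemur"]
labels = ["lemur", "cat",   "cat", "lemur"]
print(precision_recall(preds, labels))  # (0.666..., 1.0)
```

Watching these numbers over time is what tells you whether the model is drifting, even when individual predictions look fine.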
Now, when we know both sides of the AI-Product equation, we’re able to identify the guidelines for creating a good AI product:
When thinking about the product’s users, we need to take into consideration the ML researcher who will feed and train the algorithm. With that in mind, we have some key takeaways:
The ML researcher’s top motivation is to make the algorithm as accurate as possible, so that it gives good predictions and provides actual value. For that to happen, the user needs a comprehensive understanding of the model’s inputs and outputs. For example, users should understand the importance of labelling training data and giving feedback on predictions. The better users understand the model’s key metrics, the better they’ll be able to improve the model and get better results. In other words, to improve the model, users need to understand the “needs” of the model.
In order to improve the model, it’s important to make the user’s feedback as intuitive as possible and make it a big part of the user flow. There’s only so much an algorithm can understand about human needs without actual human input (imagine expecting a baby to learn how to speak without teaching it what’s right and what’s wrong).
Making users feel like they’re taking an active part in a product’s functioning is highly beneficial, for two reasons:
Extra reading on the subject can be found on this great post about the IKEA effect.
Having the model learn from the highest-quality input is a shared motivation. Good design can encourage the user to upload high-quality inputs and point out when and why low-quality inputs aren’t good enough. For example, a message saying the input image’s quality is too low, phrased in a way the user understands and “believes”, so that they want to upload better images.
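A sketch of such a quality gate: instead of silently rejecting a bad input, it tells the user exactly what is wrong and why it matters to the model. The resolution threshold here is an assumed, illustrative value, not a real requirement.

```python
# Hypothetical sketch: reject low-quality inputs with a reason the user can
# act on, rather than failing silently.
MIN_WIDTH, MIN_HEIGHT = 224, 224  # assumed minimum the model was trained on

def check_image_quality(width: int, height: int) -> str:
    """Return 'ok', or a message explaining why the input isn't good enough."""
    if width >= MIN_WIDTH and height >= MIN_HEIGHT:
        return "ok"
    # Tell the user what is wrong and why it matters to the model.
    return (f"Image is {width}x{height}; the model needs at least "
            f"{MIN_WIDTH}x{MIN_HEIGHT} to make a reliable prediction. "
            "Please upload a higher-resolution image.")

print(check_image_quality(1024, 768))  # ok
print(check_image_quality(100, 80))    # explains why it was rejected
```

The explanation is what makes the user “believe” the rejection, and therefore want to supply better input.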
In conclusion, products are becoming much simpler for the end-user, but much more complex “behind the scenes.” As product managers, designers, and developers, we should aim for an excellent front-facing product, but also aspire to contribute to the efficiency of our product’s algorithms. That’s why we need to add the ML model to our design parameters, in any way possible.
We need to find a way to make humans understand and help AI, alongside AI helping humans.