When two people have a conversation, meaning comes as much from the back-and-forth between them and the non-verbal cues they give (a frown, maybe, or an anxious tone of voice) as it does from the actual words used. But when we talk to machines we all too often only have the words – when we search for something we type words into a box and expect meaning to come out as a result. On the second day of SXSW, Google hosted a discussion between design and research – with Hector Ouilhet, their Head of Design, and Laura Granka, their Head of UX and Research – on how we can build better relationships between humans and machines.

So the best, and most profitable, interactions we have as humans come from these three things:

1. The words that are spoken
2. The exchange of words, the back and forth between the people in the conversation
3. The non-verbal cues

This helps us to understand not just what is said but the intent. In English, for example, we ask questions that are actually not questions at all (asking a dinner companion “Can you pass the salt?” is not actually an enquiry into their ability to do this task; we just want them to give us the salt). Words alone do not make profitable human-to-human interactions. And so words alone will not make profitable human-to-machine interactions.

What is needed is a translator, a way to augment the words that we might type into a machine to help it to truly understand what we mean and what we want, and to remove any friction from these interactions. And this is where Google comes in.

The discussion between Ouilhet and Granka showed how the Google Search product has evolved over time to promote better understanding and a more profitable human-machine experience – how Google Search is the translator between them.

When search first appeared it replaced portals as our primary way of accessing information on the internet. We went from a system very much like libraries – where everything is in categories and we go to look for it – to a new, more open way of finding things – a blank box for us to type into. But the way people used search would not necessarily help them find their way to the solution they wanted.

A father looking for things to do with his young daughters on a rainy day in San Francisco, for example, would just type in ‘activities to do with my daughter’. And expect results that would inspire what they might do that afternoon. In the rain. In San Francisco. But without this knowledge – not knowing where they were, the day of the week or the weather, let alone anything about the father and daughter – how could Google deliver a search result that actually answered this father’s question?

The development of the Google Search product has been the story of the quest to work out intent; to move from thinking of success as just the speed and quality of the search results, to realising it is the happiness of the person searching and their ability to complete the task before them.

So how can Google work out the answers to all those other elements that help to define meaning, without the father having to type them into Google himself? This is why search results should look different for different people, and why Google can use data it knows about you and about the environment you are in to serve you better results. If it knows, based on your previous browsing and search behaviour, that you are interested in art then it might suggest certain activities over others. If it knows that it is a sunny day it might suggest outdoor activities. And if it has inferred the age of your daughter it can suggest age-relevant activities.

If Google can use data to infer the intent behind your search terms then it can deliver a more helpful set of results that are more likely to get you to the answer you need.
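
To make that idea concrete, here is a minimal sketch in Python of what context-aware re-ranking could look like. The signal names (`weather`, `interests`, `child_age`) and the scoring weights are illustrative assumptions for this post, not Google’s actual ranking system:

```python
from dataclasses import dataclass, field

@dataclass
class SearchContext:
    """Signals inferred about the searcher; all names are illustrative."""
    location: str
    weather: str                                  # e.g. "rain" or "sun"
    interests: set = field(default_factory=set)   # from past browsing
    child_age: int = None                         # inferred, may be unknown

@dataclass
class Result:
    title: str
    tags: set        # topical labels for the activity
    min_age: int = 0

def rank(results, ctx):
    """Re-rank candidate results by how well they match the inferred context."""
    def score(r):
        s = 0
        # Prefer indoor activities when it is raining, outdoor when it is sunny.
        if ctx.weather == "rain" and "indoor" in r.tags:
            s += 2
        if ctx.weather == "sun" and "outdoor" in r.tags:
            s += 2
        # Boost topics the searcher has shown interest in before.
        s += len(ctx.interests & r.tags)
        # Demote activities unsuitable for the inferred age of the child.
        if ctx.child_age is not None and ctx.child_age < r.min_age:
            s -= 10
        return s
    return sorted(results, key=score, reverse=True)

if __name__ == "__main__":
    ctx = SearchContext(location="San Francisco", weather="rain",
                        interests={"art"}, child_age=6)
    candidates = [
        Result("Hike in the Presidio", {"outdoor"}),
        Result("SFMOMA family day", {"indoor", "art"}),
        Result("Escape room", {"indoor"}, min_age=12),
    ]
    for r in rank(candidates, ctx):
        print(r.title)   # SFMOMA family day comes first on a rainy day
```

The point of the sketch is simply that the same query produces a different ordering once context signals are scored alongside the words themselves.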

But it is not all about data for better understanding at an individual level. Google is also working with user journeys to design better search experiences. The example given in the session shows how this works. For a simple search, such as ‘US Presidential Election’, the results you want (the task you are looking to complete) would be vastly different depending on your stage in the user journey. On the evening of the vote the top search result should be the election results. That morning it should be a ‘how to find your polling place’ result. And a few weeks earlier that same search string should take you to information about how to register to vote.
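
A small sketch of how the same query string could map to a different top result at each journey stage. The milestone dates and the helper name `top_result` are hypothetical, chosen only to illustrate the idea:

```python
from datetime import datetime

# Illustrative milestones; real registration deadlines vary by state.
REGISTRATION_DEADLINE = datetime(2020, 10, 19, 23, 59)
POLLS_OPEN = datetime(2020, 11, 3, 7, 0)
POLLS_CLOSE = datetime(2020, 11, 3, 20, 0)

def top_result(query, now):
    """Pick the most useful result type for one query string,
    based on where the searcher sits in the voting journey."""
    if query.lower() != "us presidential election":
        return "standard results"
    if now >= POLLS_CLOSE:
        return "live election results"        # evening of the vote
    if now >= POLLS_OPEN:
        return "find your polling place"      # morning of the vote
    if now <= REGISTRATION_DEADLINE:
        return "how to register to vote"      # weeks earlier
    return "candidate and ballot information"

print(top_result("US Presidential Election", datetime(2020, 11, 3, 9, 0)))
# -> find your polling place
```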

It’s only by using data at the individual level, and properly interrogating the user journey, that Google is able to deliver search results that are not just quick and accurate, but that actually help you find the answer you are looking for. Because words alone do not provide everything we need to understand what is wanted – there needs to be a translation between humans and machines that can infer the intent behind those words.

Full coverage of SXSW 2018:

SXSW Day 1: Empower Displaced People Through Technology. 

SXSW Day 3: What Does the Internet Look Like Without Screens?

SXSW Day 4: Are Driverless Cars Really a Thing?

We Love Esther Perel.

Matt Rhodes
