Silicon Beach: Being Intelligently Artificial

9th May 2017
When I sat down to write Hippo last year, I focused a lot of the content on the theme of Artificial Intelligence. If you’ve read the first chapter — Accidental Polymath — you’ll know that since the very start of my career I’ve always had a slant towards data-informed product design. So it wasn’t a huge surprise that I ended up gravitating towards the themes that I did. The phrase “Artificial Intelligence” is invoked as if its meaning is self-evident, but it has always been a source of confusion and controversy, which is why I prefer referring to smart technology as “Intelligently Artificial” instead.
 
I wanted to write a book about our relationship with technology and ask the big questions about how machines and humans can collaborate and communicate better with each other. Which, in a lot of ways, has been the role of the interface and service designer over the last 50+ years — to make machines easier to use in order to help people fulfil a certain set of tasks. The conclusion in the book, though, is pretty clear — when a machine understands human language, we simply don’t need all the Screens, Buttons and Clicks anymore.
 
We’re entering a time of folksonomy, not taxonomy, and the implications for design and designers are profound.
When (not if) machines get smart enough to return a response or a piece of content simply by being asked, we don’t need navigation paradigms, we don’t need trails of screens and instructional content, and we certainly don’t have to rely on fixed, structured webs of influence anymore — we have fluid, organic experiences. Organised chaos. Products that perceive an environment and take actions that maximise their chance of success at a task and goal. Products that adapt, rather than sitting rigidly and expecting a person to learn them, then navigate them.
 
Out of the back of that conclusion, the big question is this: are we going to witness the slow death of the clickable interface? No, I don’t believe we will, but things are certainly being disrupted in the field of interface design, and that trend will only continue to grow as the machines we teach get smarter.

 

Self Learning Experiences

 
This is a true story.
 
One night earlier this year, I was preparing for a talk I was going to give in Copenhagen, when I began to notice some peculiar things going on with a system called ⭕ we’ve been building for the support of vulnerable men. Apparently, our little baby bot 🤖, had suddenly and almost immeasurably improved. I began to experiment with it and was astonished. I knew I had to finish my talk for Copenhagen, but ⭕ refused to relax its grip on my imagination. Our system had demonstrated an overnight improvement roughly equal to the total gains it had accrued over its entire lifespan to that date.
 
Don’t get me wrong, we haven’t created true intelligence. We don’t tick enough boxes for that. For something to be truly intelligent, it needs to cover all the A’s:
 
  • Aware: Be cognisant of context and human language
  • Analytical: Analyse data and context to learn
  • Adaptive: Use that learning to adapt and improve
  • Anticipatory: Understand likely good next moves
  • Autonomous: Be able to act independently without explicit programming
We’re currently two short, and working hard on those last two.
 
The other implication of this new type of auto-magic experience will undoubtedly be the huge, positive impact it will have on our economy. In a recent survey of corporations by Capgemini, it was found that:
 
  • 87% of UK senior executives think that automation will increase efficiency and effectiveness within their organisations
  • 75% of UK respondents said automation has created new roles within their organisation (interestingly the UK is amongst the lowest)
  • 68% of UK organisations have already seen a 10% uplift in sales, directly tied to automation implementation
  • When asked about the functional areas they believe to have benefitted from automation implementation, the highest number of senior executives (31%) said customer service, followed closely by finance (27%)
So in reality, despite the negative press, intelligently artificial processes and experiences will boost the economy, create new types of jobs and spark a kind of renaissance for a lot of skilled workers, who’ll get time back to do what they want to do — solving problems that are non-linear in nature.
 
All that said, the reason I’m mentioning it in an article like this is that, sadly, one of the industries likely to be rocked very hard by this new movement towards invisible experiences will of course be the interface design space. I’ve been a professional designer for well over 20 years now and I’ve had to constantly adapt or die with each ebb and flow. This new wave is a really big one. Which is why, with US, we’re pushing the boundaries of what a designed experience could and should be. Taking a step back from the screens and peeling away all the erroneous layers we relied on as a means to get to the end. It’s a simple philosophy — if a machine can mimic communication with a human being, and that human being can effectively communicate back to the machine, how do we make experiences that are more collaborative by design? More natural by design. More humanistic by design. More real.
 
Too many people design the next version of something that exists, rather than the best version of something new.
At US, we try and dodge the term “Chatbot” if we can. Chat is just an interface. Things under the hood are a bit more complex. Sure, we can push our automation out through chat or voice interfaces, but a chat or voice interface is still simply that—an interface.
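That separation of concerns can be sketched in a few lines. This is a toy illustration with hypothetical names (it is not the real Nexus codebase): all the understanding lives in one interface-agnostic core, and thin adapters render the same response for chat or for voice.

```python
from dataclasses import dataclass

@dataclass
class Response:
    text: str
    suggestions: list[str]

class AutomationCore:
    """Interface-agnostic logic: the 'under the hood' part, not the chat UI."""
    def handle(self, message: str) -> Response:
        # Toy rule in place of real language understanding.
        if "help" in message.lower():
            return Response("It sounds like you could use some support.",
                            ["Talk to someone", "See resources"])
        return Response("Tell me more about that.", [])

class ChatAdapter:
    """Renders a core response as a chat bubble with quick-reply buttons."""
    def render(self, r: Response) -> str:
        return r.text + "".join(f" [{s}]" for s in r.suggestions)

class VoiceAdapter:
    """Renders the same response as a spoken prompt, with no buttons at all."""
    def render(self, r: Response) -> str:
        options = (" You could say: " + ", or ".join(r.suggestions)
                   if r.suggestions else "")
        return r.text + options

core = AutomationCore()
reply = core.handle("I think I need help")
print(ChatAdapter().render(reply))   # chat surface with tappable options
print(VoiceAdapter().render(reply))  # identical logic, different surface
```

Swapping the adapter changes the surface, not the behaviour, which is the point: the interface is the thinnest layer in the stack.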
 
I saw a meet-up advertised recently where a big group of interface designers were getting together to discuss designing better chat interfaces. I despair. That misses the whole point: this new world doesn’t need more dogmatic designing of interfaces, it needs more thought on the words, the behaviours and the invisibility of the experience, not whether the bubbles are more bubbly or the buttons are more buttony. What’s important is how these smart products start to simulate (and stimulate) human conversation through their reaction to a request, not how glossy the interface is — it never has been and never will be the form of something that makes it work.
One of the organisers of that particular meet-up referred to me as “a cowboy”… yes my friend, those of us who want to survive on the frontier become cowboys.
When we first started the ⭕ project last year for ManMade, it could keep a conversation going for about 2 or 3 minutes. As it learned more and we kept training it to go further, and the errors became fewer, it started to be able to engage for more like 4 minutes, then 5 minutes. Not Turing Test type stuff either; it’s not meant to be ‘that’ kind of smart. It’s engaging enough to keep you rabbit-holing. Seeking more, going deeper, finding more answers or asking you more questions, but it’s certainly no general intelligence. 5 minutes might not sound like a lot, but it’s a sign of where things are heading. It’s still the same interface it has always been. No UI tweaking going on here; it’s pure dialogue design and corpus training — minimum viable personality — that has improved the experience.

NexUS

What we also found is that simple conversations become much more engaging when lots of little bots are incorporated into them, because those micro-bots can go off and bring back other pieces of data and information in a way that a person just can’t. ⭕, for example, can make a recommendation to a person about where to seek help for a particular human issue, without any delay in communication, simply by observing the language that the person uses and inferring that they might need support for other issues. A person-to-person chat has lags and drop-offs while we go off and Google the next best response or service. A bot does not.
 
Even with the work we’ve been doing, we have a long way to go. I’m just really glad to say that the Nexus AI platform we’ve been slowly developing, and the experiments like ⭕ that run on it, are starting to get a lot smarter every day. ⭕ still gets about 4 out of 10 queries wrong, which isn’t good enough to launch this on a big scale, but my kids fell over a lot before they could run… so we’re cool with that.
We’ve switched our focus from just NLP (Natural Language Processing — recognising intent and responding appropriately) to much more focus on NLU (Natural Language Understanding — you meant X, Y and Z) in order to really make the outputs more humanistic by design. Over the next two or three years we’ll be designing and developing a totally new breed of human-computer interface that will combine lots of human language and behavioural technologies to enable information access and transactional processing using lots more passive cues too. It’s not empathic (ignore any journalist or designer who talks about ‘empathic machines’; empathy is a human trait, and we’ll keep that one for ourselves, thanks!). It’s simply giving the smarter technology more points of reference so that it can give us help before we know we need it.
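The NLP-versus-NLU distinction can be made concrete with a toy example (keyword rules standing in for real models, and every pattern here is invented for illustration): intent recognition answers “which known request is this?”, while understanding also extracts what was actually meant — duration, severity, the cues you can act on.

```python
import re

def recognise_intent(message: str) -> str:
    """'NLP' level: match the utterance to a known intent."""
    if re.search(r"\b(insomnia|awake|sleep)\b", message.lower()):
        return "sleep_problem"
    return "unknown"

def understand(message: str) -> dict:
    """'NLU' level: keep the intent, but also extract what was meant --
    how long it's been going on, and how severe it sounds."""
    intent = recognise_intent(message)
    m = re.search(r"(\d+)\s*(night|week|month)s?", message.lower())
    return {
        "intent": intent,
        "duration": f"{m.group(1)} {m.group(2)}s" if m else None,
        "severe": "every" in message.lower() or "always" in message.lower(),
    }

print(understand("I've been awake every night for 3 weeks"))
```

The intent alone would trigger the same canned reply every time; the extra structure is what lets a response acknowledge “three weeks” and “every night” specifically, which is the humanistic-by-design part.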

Conclusion

Invisible, passive experience design is where the future lies, and it’s where we’re heading. Conversational platforms that understand human language and behaviour, and produce accurate results in invisible, natural ways, are the direction of travel.
 
The kind of design and technology we’re building at US will solve important human problems and remove complexity from everyday life. We’re creating context-specific products with no historical design clichés, that enhance human ability without replacing the human. We’re developing the kinds of services that build long-term relationships without creating emotional dependency and, like ⭕, we’re constantly learning and working collaboratively with the technology. Because of that, I have a feeling that Intelligently Artificial products will always refuse to relax their grip on my imagination — which is, after all, why I wanted to be a designer in the first place. Not to design more buttons, but to be curious.