Thinking About Thinking

7th May 2018
“Thinking emerged to serve the active, developmental, linguistic and loving interests of thinkers. And it’s because they don’t have such interests that computers can’t think.” Ann Long
The idea of machines that can ‘think’ is something that has captivated the imagination of both the data science community and the pop-culture-guzzling public. For a machine to be able to think would be like creating artificial life itself.
Humans have long seen themselves as being at the top of the animal kingdom, set apart from other creatures by our superior intelligence, and at the forefront of those mental powers are our abilities to reason and to think.
But what is thinking? And if thinking is the unique skill that sets us apart from other animals, why is it that, as a species, we all think so differently? Why is it, for example, that when presented with the same facts and information, one person may draw one conclusion while another might conclude something completely different? If that in itself determines intelligent thought, then what implications does that have for the theory of the thinking machine?
The tech community is good at throwing this notion of the thinking machine around without a proper philosophical grounding of what it means to ‘think’ (note my choice of the word ‘philosophical’). It’s all too tempting to assume that a machine that could think would think like us. But, with a bit of reflection, that’s probably not an inevitable conclusion. Why would machines ever ‘think’ like us?
I’m vocal in my view that machines don’t, can’t, and won’t think, and it’s going to take a lot to make me backtrack on that view. But to make my case against the concept in a more structured way, I think it’s worth us exploring what ‘thinking’ might mean.

Thinking

To begin with, we’d better be clear about what I mean by ‘thinking’. A comparison between machine and human thinking is what the tech community (and the mainstream social media and marketing community!) are talking about most of the time, but what about animal thinking? Does a chimpanzee think? Does a cow? Does an octopus? Is thinking just computation? If it is, then yes, I concede that a computer may one day ‘think’. But thinking is a lot more than that.
Thinking is a messy process. I’ve seen my kids trying to crack a tricky situation… their pupils dilate, core temperature rises, the heart races, blood pressure skyrockets, respiration can sometimes become rapid, and the brain fires bursts of electrical impulses from nowhere to nowhere… all that, just to try and decide which flavour of ice-cream might work with another when there are ten options available.
In his book “On Intelligence”, Jeff Hawkins argues that the central function of the brain is to remember, and that thinking is therefore just a “pattern-matching” process. Which, if correct, gives some credibility to the concept of a thinking machine. That’s basically what algorithms do, right?
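If thinking really were nothing more than pattern-matching against stored memories, a caricature of it fits in a few lines of Python. Here’s a toy sketch under my own assumptions (the ‘memories’, the features, and the similarity measure are invented for illustration; none of it comes from Hawkins’ book):

```python
# A caricature of "thinking as pattern-matching": recall the stored
# memory most similar to a new observation. Purely illustrative.

def similarity(a: set, b: set) -> float:
    """Jaccard overlap between two sets of features."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical stored "memories", each a bag of features.
memories = {
    "ice-cream": {"cold", "sweet", "dessert"},
    "soup": {"hot", "savoury", "starter"},
}

def recall(observation: set) -> str:
    """Return the best-matching memory for a new observation."""
    return max(memories, key=lambda m: similarity(memories[m], observation))

print(recall({"cold", "dessert"}))  # -> "ice-cream"
```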
But when we dive under the hood of this thing we liberally refer to as Artificial Intelligence (at least as we currently imagine it), what we find is more than pattern-matching. It’s about sheer computational power. It’s about calculations per second, and the number of potential decision pathways. It’s about outcomes decided by algorithms: stochastic predictions and so forth. And while I acknowledge that machine thinking is basically really clever computation, I also acknowledge that, in comparison, the human brain is actually pretty bad at it…
Most of us mere mortals are frazzled by multiplying a couple of two-digit numbers in our heads, or choosing ice-cream flavours, let alone performing trillions of calculations a second. Maybe some deep processing of data goes on below our awareness, despite our arithmetically impaired conscious minds?
Let’s just agree that what computers are good at (the raw data manipulation I mention above), humans are quite weak at; and what machines are bad at (such as creative language, speech, poetry, voice recognition, interpreting complex, irrational behaviour, and making holistic judgements), humans are actually quite adept at.
I’ve no doubt the data-science community will argue that machines are mastering language, voice recognition and so on… but they will never be as skilful as a fully functioning human!
So, as we make the comparison between human and computer “thinking”… what bothers me is that the argument doesn’t match up. Why would we expect computers to eventually ‘think’ like people? For them to do so, perhaps the computers of the future will need to lose their natural arithmetical aptitude as the full weight of their consciousness emerges? Perhaps they need to be… more human!
“The trouble with having an open mind, of course, is that people will insist on coming along and trying to put things in it.” Terry Pratchett

The Art of Being Human

Other factors are in play. Our thinking and thought processes are often distorted by distraction, and even by mental ill-health. So, if you do buy into the concept of a ‘thinking’ machine, you must also buy into the concept that the same machine can have its thinking corrupted by external and internal forces. It would need to be capable of being mentally unwell as well as functioning optimally. Think about that for a moment. I argue that it’s some of these mental quirks that make humans such beautiful thinkers, and ultimately better than any machine at ‘thinking’.
Earlier in April, I sat in a Van Gogh exhibition in Dubai. As I stared into the distorted mind of such an influential artist, it occurred to me that his situation shaped his art. His mental conflict. His thoughts were corrupted, and that is what made his stories in paint so visceral. I don’t believe machines will ever think that way. They’re designed to be… well… perfect.
Jazz illustrates another kind of artistic thinking that does not involve words. The first thing to say is that we don’t know what’s going on in a musician’s head as they play. Music is an odd thing, psychologically speaking. Music is like thought, in that it has structure, emotion, and logic. So what does this have to do with machines that can think? Well, I would posit that for a machine to ‘think’, it would need to be able to replicate some of those complicated, emotional thought processes: improvising, reading the room to change a chord, reacting emotionally.
“Words make you think. Music makes you feel. A song makes you feel a thought.” E.Y. Harburg
Developmental studies will continue to be relevant, as will research on humans and other animals who do not have, and have never had, language. But by thinking more carefully about these important concepts, I think we will make progress.
If you think about ‘thinking’ from that human perspective, perhaps artificial, non-thinking computers seem in many ways more useful to us than exact copies of ourselves. They do things we can’t—they can do better things, rather than do things better. They don’t have to think!

Survival of the weakest

Here’s another perspective… humans are members of the kingdom Animalia. Each human exhibits animal characteristics. Humans live as animals, and we can speak… computers don’t. Not like us, anyway. Because of speech, each human develops a capacity that is unique to that individual. Speech helps us to internalise the spoken word into non-spoken thought: a function which means that we can cease merely to “inhabit” the present world, and come instead to “make” the past-present-future world.
To build on that idea: if language plays such a vital role in the development of a human’s thinking, and is dialogic, then doesn’t that inevitably limit how much a machine can think? At least until we have machines that can adequately explain their logic to us. When a machine can articulate its internal conversation between different perspectives, and explain the ‘give-and-take’ quality of how it processes something, then I’ll concede that we might be on the path to a machine that can ‘think’.
But because humans are the linguistic animals I refer to (vocal for those of us who use spoken language, visual for people who use sign language to communicate), we all share a unique, interconnected, ecological relationship with each other. I’m sure there are a thousand examples of machines mimicking those things, but until they are able to react and proactively communicate through language, it’s just a process, not an actual thought. Machines need to be able to communicate with each other as we do, to learn, and to explain.
Thinking people do many other things… things that I agree computers can attempt to copy. Thinking is everything that the conscious mind does, including perception, some mental arithmetic, remembering and recalling things, and conjuring up an image of something from memory. Yes, we can train a machine to mimic those functions, and by this definition, thinking means merely conscious cognitive processes and is very binary and mechanical. Tick. Machines match and often best us at those things. But is it thinking?
That definition is pretty broad, and quite simplistic when we consider the real wonder of people and who we are. What if we look at the psychodynamic interpretations of thinking? When we do that, we quite quickly arrive at concepts like ‘unconscious thinking’. There are tremendously important unconscious cognitive processes shaping the way we make sense of the world, but, again, I’m happy to back down and agree that, for the most part, ‘thinking’ seems to be quintessentially a conscious thing, so machines probably won’t need to do the unconscious cognitive stuff to be perceived as ‘thinking’.
But here’s the real biggy for me: what about ‘the experience of thinking’? What we see, hear, and feel affects our thoughts. Thinking is sensory. Phenomenology can be misleading here; it makes it all about the human brain. ‘I think, therefore I am’ is not true. ‘I am, therefore I think’ feels more justified. Just because experience seems a certain way from the outside, that doesn’t mean it’s a true guide to what’s going on in our brains. A thought can be instantly altered and changed by a loud noise or a smell. Breaking concentration, if you like. Changing direction.
Imagine for a moment that you are doing something we would intuitively describe as thinking (say, walking to work or reading this article): you have a sense of a flow of inner speech. Our thinking has a verbal quality. It often feels as though we are talking to ourselves: not all the time, perhaps, but for an essential part of it. It might seem that we have words in our heads when, actually, we don’t. Do machines do that? No. They don’t. They process. If I distracted a machine, it would probably just carry on processing.
When we see, feel, smell, taste or hear something, we add that information to our stored memories to “fill in the blanks”. For example, if I showed you a photograph of a close friend or relative with 75% of the picture covered up, I am sure it would take you no time at all to work out who it was, as your memory would quickly fill in the missing 75%. However, if I tried the same exercise using pictures of people you knew but who were far less familiar to you, you would struggle to achieve anything like the same degree of success, as your memory of their faces would be nowhere near as reliable.
Thinking is, therefore, a process of comparing our stored memories to either new information or other stored memories. Since we naturally give a higher weighting to the most potent thoughts, those that conform to “our view of the world” will tend to take priority over memories which might not map to this view. If this is indeed the case, then it would explain why, when faced with the same facts, different people can draw radically different conclusions.
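To make that weighting idea concrete, here’s a toy sketch (my own illustration, emphatically not a cognitive model): the same evidence, matched against the same stored memories, yields different conclusions once each observer’s ‘view of the world’ is applied as a prior weight.

```python
# Toy model of weighted recall: identical evidence, identical memories,
# but different priors ("views of the world") produce different
# conclusions. An illustration, not a cognitive model.

def conclude(evidence: set, memories: dict, priors: dict) -> str:
    def score(label: str) -> float:
        overlap = len(evidence & memories[label]) / len(memories[label])
        return overlap * priors[label]  # more potent thoughts weigh more
    return max(memories, key=score)

memories = {
    "threat": {"loud", "fast", "unfamiliar"},
    "play": {"loud", "fast", "familiar"},
}
evidence = {"loud", "fast"}  # the same facts for both observers

anxious_view = {"threat": 0.9, "play": 0.1}
relaxed_view = {"threat": 0.1, "play": 0.9}

print(conclude(evidence, memories, anxious_view))  # -> "threat"
print(conclude(evidence, memories, relaxed_view))  # -> "play"
```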
So my narrower definition of ‘thinking’ goes like this. Thinking is conscious, and it is active. It needs physical inputs to fill in the blanks. It is the kind of cognitive process that can make new connections and create meaning when internal and external stimuli are thrown at it. Can machines replicate something so intricately subjective?

Stick to what you’re good at

“Judge a person by their questions rather than by their answers.” Voltaire

In 1964 Joseph Weizenbaum created an early natural language processing computer program called ELIZA. It was designed to demonstrate the superficiality of communication between humans and machines.
ELIZA simulated conversation by using a simple ‘pattern matching’ and substitution methodology that gave people an illusion of understanding on the part of the program, but had no built-in framework for contextualising events.
It simulated a Rogerian psychotherapist, conversing with people by appearing sympathetic, asking bland questions, and asking the patient to clarify what he or she had just said, or discuss how he or she felt about it. It worked because the bot did not need to engage in any detail with the actual content of the patient’s problems. The patient did all the work. This is ideal for a computer program, which can attempt to carry on a conversation without having to understand anything the human says at all. Was it thinking? Not at all.
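To show just how superficial the trick is, here’s a minimal ELIZA-flavoured sketch in Python. The patterns and canned replies are my own toy examples, not Weizenbaum’s original script:

```python
import random
import re

# A minimal ELIZA-style bot: match a pattern, substitute the captured
# fragment into a canned reply. The patient does all the work.
RULES = [
    (re.compile(r"i feel (.*)", re.I), ["Why do you feel {0}?",
                                        "How long have you felt {0}?"]),
    (re.compile(r"i am (.*)", re.I), ["Why do you say you are {0}?"]),
    (re.compile(r"my (.*)", re.I), ["Tell me more about your {0}."]),
]
FALLBACK = ["Please go on.", "How does that make you feel?"]

def reply(utterance: str) -> str:
    """Pattern-match and substitute; no understanding required."""
    for pattern, responses in RULES:
        match = pattern.search(utterance)
        if match:
            return random.choice(responses).format(match.group(1).rstrip(".!?"))
    return random.choice(FALLBACK)

print(reply("I feel tired of choosing ice-cream flavours."))
# -> e.g. "Why do you feel tired of choosing ice-cream flavours?"
```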
Weizenbaum’s trick remains one of the classic methods for building a chatbot—we use it in SU. We effectively taught our machine to be Socratic. But it’s not thinking; it’s understanding, processing, and reacting. It’s brilliant, but it’s not ‘thinking’.

Biological vs. Mechanical

So far, so good. I am claiming that thoughts and thinking are much more than complex processing. Now that we have a slightly more definite sense of how I think of thinking, we can try to define it in relation to other things that are going on, cognitively and perhaps neurologically. And then maybe we can make some progress as to whether or not a machine can ‘think’.
I have suggested that thinking is, in part, inner speech. That’s a strong claim, and it requires another step in the argument. We usually assume that inner speech is just one kind of thing: a flow of words in the head which appear to us, subjectively, like heard language.
Inner speech relies on a process of transformation that is both semantic and syntactic. In a nutshell, the language that is to be internalised becomes abbreviated, so that inner speech becomes a ‘note-form’ version of the external dialogue from which it derives. In its condensed form, the language that forms inner speech has all of its acoustic properties stripped away, losing the qualities of tone, accent, timbre and pitch that distinguish spoken language. We see some aspects of this process in action in children’s private speech, which undergoes the same transformational processes as it gradually becomes internalised. When my kids were younger, I observed them playing games: they skipped words completely when they were lost in their own happy little worlds.
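As a crude caricature of that condensation (a sketch under my own assumption that dropping function words roughly approximates ‘note-form’ speech; an illustration, not a linguistic model):

```python
# Toy caricature of "condensed" inner speech: strip function words so a
# full sentence collapses into note-form. The stop-word list is my own
# invention, not a model of how internalisation actually works.
FUNCTION_WORDS = {"i", "am", "is", "the", "a", "an", "to", "of",
                  "and", "that", "it", "my", "we", "for"}

def condense(sentence: str) -> str:
    words = sentence.lower().rstrip(".!?").split()
    return " ".join(w for w in words if w not in FUNCTION_WORDS)

print(condense("I am going to the shop for ice-cream."))
# -> "going shop ice-cream"
```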
In this kind of thinking, we are still using language, but it may not subjectively seem like spoken language. At other times, our thinking takes the form of a second kind of inner speech, expanded inner speech, where subjectively we do experience a full-blown internal dialogue playing out in our minds. We have a sense of participating in an actual internal conversation, with one point of view answering another, just like a dialogue spoken aloud between two people. In that sense, thinking is more than a dream; it’s a skit being played out inside our minds.
Together, these two forms of inner speech make up a narrower category of ‘thinking’. Yet we are often conscious of thoughts that cannot be put into words. There are at least two reasons why this might be so. Firstly, thinking does not equate to consciousness, so of course we can be conscious of things we can’t express verbally. Secondly, the experience may be one of condensed inner speech: the thinking is not fully verbally expressible simply because it has not yet been expanded into full, recognisable language. This kind of thinking could be likened to the rain before it falls. A thought is like a ‘cloud shedding a shower of words’, only fully expressible when it is converted back into regular language. The rain is there in the cloud, but not yet in the form of raindrops.
In fact, I think we do most of our thinking in condensed inner dialogue, and I believe it gives our cognition some exceptional qualities, such as flexibility, creativity, and open-mindedness. Our brains have evolved to meet specific critical demands, and specially developed, relatively autonomous systems may subserve many of their functions. Condensed and expanded inner dialogue are the basis for the internal conversation that allows us to integrate the different things our brains do. It’s this that I call ‘thinking’. And it’s this illogical method that we’re not training machines to replicate.

Summary

To wrap up… that’s my slightly messy, rambling overview of what goes on in my brain when I say that I don’t believe ‘machines can think’. I mean ‘thinking’ in the broader sense. I don’t doubt for a second that a machine can process things better than we can. But will it have a rich, fascinating, conscious mental life? I doubt it. That’s something our flaws allow us to achieve; we wouldn’t make a flawed machine, because that’s illogical. But as machines start to master the language to pull it all together, and evolve past where they are right now, maybe something akin to ‘thinking’ will emerge. Maybe something better? Thinking is something that takes time to develop. Speech and thought have to become integrated. When they are, something extraordinary starts to emerge… humanity.
“Cogito ergo sum. (I think, therefore I am.)” René Descartes