Rankism – Opinions Embedded in Code

8th December 2018

In 1864, computing pioneer Charles Babbage wrote: “On two occasions I have been asked, ‘Pray, Mr Babbage, if you put into the machine wrong figures, will the right answers come out?’ I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.”

Fast forward 154 years, and we’re sitting amidst one of the most significant emerging opportunities, and potential upheavals, in human memory. There’s been lots of chat recently about training algorithms with historical data that could be ingrained with negative, historical bias. We’ve become all too aware that some potentially devastating social repercussions arise when human predilections (conscious or unconscious) are brought to bear in choosing which data we use, and which data we disregard. That’s how A.i – in particular, machine learning and deep learning – works: it takes large data sets as input, distils the essential lessons from those data, and delivers conclusions based on them. Furthermore, when the process and frequency of data collection are uneven across groups and observed behaviours, it’s easy for problems to arise in how we analyse, learn, and make predictions from that data.
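
To make the ‘bias in, bias out’ point concrete, here’s a minimal, hypothetical sketch. Everything in it (the groups, the outcomes, the numbers) is invented for illustration: a toy model that ‘distils’ skewed historical records by learning the majority outcome per group will faithfully reproduce that skew in its predictions.

```python
# Minimal sketch of 'bias in, bias out'. All data here is invented.
from collections import Counter

# Historical records, skewed by how they were collected, not by merit.
history = ([("group_a", "hired")] * 80 + [("group_a", "rejected")] * 20
         + [("group_b", "hired")] * 20 + [("group_b", "rejected")] * 80)

def train(records):
    """'Distil' the data: learn the majority outcome for each group."""
    outcomes = {}
    for group, outcome in records:
        outcomes.setdefault(group, Counter())[outcome] += 1
    return {g: c.most_common(1)[0][0] for g, c in outcomes.items()}

model = train(history)
print(model)  # {'group_a': 'hired', 'group_b': 'rejected'}
# The 'model' faithfully reproduces the historical skew it was fed.
```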

“It’s tempting to believe that computers will be neutral and objective, but algorithms are nothing more than opinions embedded in mathematics.” Michelle Alexander

The more I’ve studied the situation, the more I understand it, and the more I believe that, yes, it is a genuine threat to the integrity of automated react-and-respond services. If our own behaviour is inherently flawed, teaching algorithms to mimic those flaws will only end in the exponential growth of negativity.

So as we deploy machine learning and deep learning algorithms into the wild, there are likely to be a lot of case studies where issues of potential bias have hardened into data and algorithms. It’s going to be a rocky ride. Our biases have a tendency to stay embedded in data sets because recognising them, and taking steps to address them, require a profound mastery of data science, as well as a more meta-understanding of social forces, including how the data was collected in the first place. When all is said and done, removing bias from automation is proving to be among the most daunting obstacles, and certainly the most socially fraught, on our journey to what we might choose to christen ‘true’ A.i.

Let’s consider, for example, the fact that many of these learning and statistically based predictive models implicitly assume that the future will be just like the past. That’s a big problem: basing future decisions on historical outcomes is about the dumbest thing people can do, especially when you look at today’s irrational behaviour. We only have to turn on the TV, or open up a web browser, to see the problem in multi-coloured abundance. Puerile, delinquent behaviour from world leaders. Angry, defiant, bickering business heads. Add to that the vast swathes of the public on social media, whose anonymity provides them with a platform to abuse the people they perceive to be ‘weak’.

It’s predatory. Primal. It’s real. What the online realm does is amplify extreme versions of ourselves—it gives rise to a particular type of bias, which we’ll get on to next.

Spectres in the machines

Rewind a few weeks. At Us Ai, we’ve just completed a really humbling study for a new client, Harry’s. We deployed a narrow listening-bot to chat, ask, and capture thoughts from a group of men across multiple ages and demographic backgrounds. Our goal was to create a barometer for certain attitudes towards life, love, work, health, and wellbeing. We collected lots of incredible insights (c.22,000 in 3 weeks, to be precise). We observed feedback relating to issues of loneliness. Feelings of inadequacy. Fear of not earning enough to provide. Feelings of existential dread. Worry about failing the people closest to us. Many, many humbling things. But what we also spotted were patterns concerning inherited patriarchal attitudes. Opinions about a perceived role and responsibility to be the hunter-gatherer… It was pure fight-or-flight stuff, like: “my dad grafted really hard to feed us, I’m worried I won’t be able to do the same for my family”. We didn’t detect anything that might typically be interpreted as deliberate or contrived; on the contrary, underlying a lot of the factors driving these feelings, and the related behaviour, appears to be an abundance of subconscious ‘fear’.

“I rank myself no higher in the scheme of things than a policeman – whose utility would disappear if there were no criminals.” Lord Salisbury

From my perspective, subconscious fear is one of the worst of humanity’s neuroses. Fear has a tendency to bring out one of two things in people:

  1. a recoiling inwards into one’s self (flight), where negative feelings can eat away at us like a cancer; or
  2. a drive outwards into something psychologists are beginning to refer to as ‘rankism’ (fight).

Rankism is the really nasty one, because it’s where all the other ‘isms’ begin.

Rankism – One ‘ism’ to rule them all

The textbook definition of Rankism refers to “abusive, discriminatory, or exploitative behaviour towards people because of their rank, or perceived rank, in a particular hierarchy”.

“I don’t understand you / my instincts are on high-alert / I’m going to climb over you ignorantly.”

The fear of being less, or of falling behind, ignites egotistical tendencies that ultimately drive all rank-based abuse phenomena, such as bullying, racism, hazing, ageism, sexism, ableism, xenophobia, mentalism, homophobia and transphobia… to name just a few of the ‘isms’ and ‘phobias’. Rankism isn’t isolated to a single societal group either; we’re all guilty of it at some point, regardless of age, socio-demographic, or agenda… we’re human, and at some point in our lives, we will try to outrank someone, or something, else.

Once you’ve identified the macro cause, you see it everywhere. Rankism becomes an obvious umbrella term to encompass many of the problems that have driven the unpleasant side of humans since time began. Rankism is a residue of predation (our species, Homo sapiens, has a long history of it). We’re not only good at it; we’re at the top of the food chain because of it. Of course, we do more than prey on animals and each other: we also cooperate, we love each other, and we have shown ourselves to be capable of living in peace and harmony. But a large part of our standing in the world is down to our rank in the grand scale of things.

As a term, it first appeared in print in the Oberlin Alumni Magazine in 1997, and later appeared in a book called Somebodies and Nobodies: Overcoming the Abuse of Rank, written by physicist and citizen diplomat Robert W. Fuller.

According to Fuller, Rankism can take many forms, including:

  • Exploiting one’s position within a hierarchy to secure unwarranted advantages and benefits (e.g. massive corporate bonuses);
  • Abusing a position of power (e.g., abusive parent or priest, corrupt CEO, bully boss, prisoner abuse);
  • Using rank as a shield to get away with insulting or humiliating others with impunity;
  • Using status to maintain a position of power long after it can be justified;
  • Exporting the rank achieved in one sphere of activity to claim superior value as a person;
  • Exploiting status that is illegitimately acquired or held (as in situations resting on specious distinctions of social rank, such as racism, sexism, or classism).

It makes a lot of sense when you see some of the outputs laid out, but here’s the rub… the last few years have been anything but average. This primitive, predatory strategy, driven by fragile psyches, isn’t working any more. The victims of this situation ultimately experience the abuse of rank as an affront to their dignity, but the weak are not as weak as they used to be, so picking on them is less safe. They’re fighting back using modern technology, which has given weapons of mass disruption to the disenfranchised. A campaign to push back on an ‘ism’ can now bring modern life to a standstill while people protest and say “no more”.

Of course, caught in the crossfire of this war of words, and its aggressive progression, are many people on the fringes of rank-based behaviour, who feel they have a place in the queue but now feel compelled to step back, or step up, often pushing them into the centre of it. It’s becoming a self-fulfilling prophecy — and for me, the prospects of this being amplified by algorithms are terrifying.

“I don’t really worry or waste my energy on where I rank and this and that. I know where I’m at.” Eric Weddle

Having spent many weeks poring over the data from our research (yes, I acknowledge that it’s only one group of people, but as a sample, it’s pretty reflective of the sentiments many people might be feeling), I am now of the firm belief that the human nature argument is flawed. Yes, there are naïve, sometimes instinctive urges to outrank someone. However, we must acknowledge that society has a broader issue to solve than just gender, race, class and so on: things that are more widespread and ingrained, which we must fix. I believe now that if we start to focus our attention on solving Rankism, we can prevent a lot of the other issues upstream, before they take hold. And that, my friends, is where I believe A.i can play a crucial role, if we open our minds a little bit.

Focus on training algorithms with a positive bias

To bring this all back into the context of data, algorithms, pattern spotting and favourable opportunity: I’ve been watching some interesting work that is focused on removing bias from companies and data. For instance, Josie Sword is pushing her initiative to train ChatBots with Feminist values… something I support, and something I contributed to, and positively challenged, when it was first brought to my attention at thesis stage. I’m not convinced that judging a work’s “soundness” is any less subjective than judging its “importance”, but it’s an amazing start on a really important journey.

However, what if we’re pitching the opportunity too low? What if teaching a machine feminist values (or any values!) is actually just another form of bias? What if we need to start training algorithms to focus on rank-based patterns instead, and start seeding in an ethical framework that looks to break that cycle of Rankism in people? The results in society could be profound. If Rankism is about fixing the game to handicap or eliminate the competition, in order to improve someone’s chances of coming away with the spoils, then we need to train A.i to understand that there is no prize.

So let’s call it Trainor’s idea of ‘Rankware’: a framework to train machines to spot, and do ‘something’ about, Rankism, and to make that a source of truth for the machine. A crude sketch of what that might look like follows below.
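
As a thought experiment only, here is what the very first, crudest slice of ‘Rankware’ might look like: a handful of hand-written cues for rank-based put-downs and a naive score. Everything here (the cue list, the scoring, the function name) is a hypothetical placeholder rather than a real lexicon or a production detector; a serious attempt would need a trained classifier and careful human review.

```python
# Hypothetical 'Rankware' sketch: flag rank-based put-downs in text.
# The cue patterns and scoring are illustrative placeholders only.
import re

RANK_CUES = [
    r"\bknow your place\b",
    r"\bi outrank you\b",
    r"\bpeople like you\b",
    r"\byou('re| are) beneath\b",
]

def rankism_score(text: str) -> float:
    """Crude score: the fraction of cue patterns present in the text."""
    hits = sum(bool(re.search(p, text.lower())) for p in RANK_CUES)
    return hits / len(RANK_CUES)

print(rankism_score("Know your place. People like you don't decide."))  # 0.5
```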

You can stop reading here if I’ve given you 5 minutes of Pete thinking. Perhaps just raising awareness of Rankism might be enough for you. But below here, I’m going to get very philosophical and dystopian. So if you go below, just go with an open mind.

The risk of training machines to understand rank

OK. Let me finish with a bit of a dystopian narrative.

As I put to you above, the opportunity to start training machines to understand, and mitigate, issues with ‘Rank’ could well solve many of today’s societal problems that stem from attitudes inherited from the past. Creating an algorithm (or RBP — rules-based personality — as I prefer to think of it) to process inputs and outputs, and spot Rankist behaviour, could be as important in the online world as the Civil Rights and Women’s Movements have been in targeting racism and sexism. Rankware could raise awareness not only of our ingrained behaviours but of their broader implications, in real time, all over the internet. Very cool. In my mind anyway, a piece of code to detect Rank and measure the impact of that attitude (across all geo-demographic groups) could be world-changing. But…
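
To illustrate the ‘measure the impact across groups’ part, a hypothetical continuation of the sketch above (reusing the rankism_score function, with invented groups and messages) might aggregate per-group averages into a simple barometer:

```python
# Hypothetical follow-on: aggregate rankism scores per demographic group.
# Groups and messages are invented; rankism_score is the earlier sketch.
from statistics import mean

messages = [
    ("18-24", "know your place"),
    ("18-24", "great work, thank you"),
    ("45-54", "people like you don't decide"),
]

by_group = {}
for group, text in messages:
    by_group.setdefault(group, []).append(rankism_score(text))

barometer = {g: mean(scores) for g, scores in by_group.items()}
print(barometer)  # {'18-24': 0.125, '45-54': 0.25}
```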

…let us now fast-forward and imagine a world where scientists (not tech companies!) have created a machine that displays a kind of hypothetical artificial super-intelligence. It’s a scenario that’s been written about at great length by Ray Kurzweil and many other people I respect. If that machine tips over into artificial consciousness (hypothetically), and we’ve trained it on my Rankware concept, it’s going to judge humanity based on the world we taught it. We’re going to give it a history lesson, with a moral code that says: “Rank is bad, therefore no rank please”. That is pretty bad news for most of us, because Rank is something we just do on instinct every day. We’re all a little bit ‘ist’, whether we think it or not.

It’ll look at my ‘Rankware’ framework, look at all our data (social media!), and basically go: “oh, well he really made a big thing out of being ‘the boss’, so he’s got to go…” and “she really tried to climb over her colleagues to fight for her cause, no thanks…” and so on. So the past becomes the problem, even if the future becomes more balanced.

It’s a deep thought, I get it. Sorry. I’m basically just saying that if I architect Rankware, I put a massive ‘Thanos Finger Snap’ in for anybody who’s ever called Rank on something.

This theory plays right into an older argument called CEV – coherent extrapolated volition – proposed by Eliezer Yudkowsky. The theory itself is pretty heavy, but for this thought experiment it describes a hypothetical artificial super-intelligence that, at some point in the future, is optimised to act only for human and social good. However, if that A.i ‘looks’ at the world (read: ‘analyses all our data’) and then decides to follow the directives it was programmed with, and to make things better than they are, that could be catastrophic for humanity. It will take social activism and merge it with social engineering; and because it makes its choices based on whatever best achieves ‘human good’, it will never stop pursuing that goal, because things could always be a bit better. Got me?
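
A toy way to see why a ‘make things better’ directive with no stopping condition never finishes: in the entirely hypothetical sketch below (world_goodness and intervene are made-up stand-ins), every intervention closes half the remaining gap to ‘perfect’, so the gap shrinks forever but never reaches zero, and the optimiser never runs out of reasons to intervene.

```python
# Entirely hypothetical sketch: a 'maximise good' loop with no finish line.
from fractions import Fraction

world_goodness = Fraction(9, 10)  # whatever the machine measures, 0..1

def intervene(goodness: Fraction) -> Fraction:
    """Each intervention closes half the remaining gap to 'perfect'."""
    return goodness + (1 - goodness) / 2

for step in range(10):  # capped here for demo; the A.i would not cap it
    world_goodness = intervene(world_goodness)
    print(step, float(1 - world_goodness))  # gap shrinks, never hits zero
# With no stopping condition, 'things could always be a bit better',
# so the optimisation never ends.
```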

Summary

Yes, rankism is what we’ve done throughout recorded history: one person to another, one group to another, one tribe to another, one nation to another, and until recently, the gains were judged to exceed the costs. But that can’t continue. Teaching the machines that having no rank is better than having rank is a route to, at the very least, a future where algorithms don’t judge us on anything. If I create A.i that targets Rankism, I could help craft a world of dignity for all, not just for some at the expense of others.

As Robert W. Fuller wrote: “As we disallow rankism, we build a dignitarian society, a world in which, regardless of rank, everyone experiences equal dignity.”

So let’s program that, and at least see what happens! Some of us may meet our match on the big ‘finger snap’ day when the machine wakes up (actually, most of us, if we’re honest)… but maybe it’s also a good framework for a future beyond that day? It’s also a good motivator to start behaving a bit nicer to each other.

Sources and further reading:

Rankism