Pierre Smith Khanna

Homo Sapiens or Homo Deus?


What will happen to society, politics and daily life when non-conscious but highly intelligent algorithms know us better than we know ourselves?

So ends Yuval Noah Harari’s Homo Deus, his most challenging book to date. In fact, it was so challenging that a few chapters in I had to put it down. It lay there beside my bed for a good few months until I decided to give it another go. Thinking about what science and technology might look like in the future isn’t really my cup of tea, you see. There seem to be more pressing issues in the world - global warming, Brexit, the war in Syria, the rise of Marine Le Pen, to name but a few. Yet somehow these all pale in comparison to what might be coming. A very uncomfortable truth.

Perhaps truth is too strong a word; as Harari himself notes: all the scenarios he outlines in his book should be understood “as possibilities rather than prophecies. When we think about the future, our horizons are usually constrained by present-day ideologies and social systems.” Yet, simply looking at what is going on today is sufficient to make one wonder what is yet to come.

One such example, seemingly innocent enough, is the meteoric rise of GPS navigation systems. I remember once asking a London black cab driver what he thought of GPS, and whether he thought he might use it one day. He was pretty confident that he never would - and he had good reason to believe so, seeing as he knew London’s streets better than he could spell his own name. But what are a black cab driver’s many years of experience compared to ‘Waze’, an app that connects hundreds of thousands of GPS systems to one another, updating each driver in real time about traffic flows and police cars? Today, every cab driver uses Waze because it’s just better, faster and more efficient. Consistently. You are about to turn right, but Waze says ‘turn left’. And once you start listening to Waze, you realise that it knows better than you. So you begin to always trust its advice.

This is just one of the myriad ways in which algorithms, designed by tech geniuses in Silicon Valley and hackers all over the world, and fed by an unimaginable amount of data, will become our preferred way of making decisions. Forget an hour’s talk with your closest friend, or a quiet walk in the park to churn over your thoughts. Soon Google, Amazon Echo, Facebook and more yet to come will all be able to tell you what, for you, is the best thing to do.

This advice will move from the tarmac to life’s paths and junctions. Should I marry this man? Google knows your entire history, your spouse’s entire history, your likes and dislikes, your character and personality traits, how many times you’ve been on holiday together, how often and with what quality you have sex together, how much time you spend together and how often you complain about one another to your friends. So long as Google has access to all your data - your emails, your credit card purchases, your biometric wristband - “it will definitely know you much better than you know yourself.”

“The self-deceptions and self-delusions that trap people in bad relationships, wrong careers and harmful habits will not fool Google […] Naturally, Google will not always get it right. After all, these are all just probabilities. But if Google makes enough good decisions, people will grant it increasing authority. As time goes by, the databases will grow, the statistics will become more accurate, the algorithms will improve and the decisions will be even better. The system will never know me perfectly, and will never be infallible. But there is no need for that.”

As long as Google knows you better than you know yourself, that is sufficient to relegate human knowledge and intuition to the dustbin of history. In a remarkable study, researchers built an algorithm which takes your Facebook Likes and from them comes up with a diagnosis of your personality. 86,220 people took part in the experiment, each filling in a 100-item personality questionnaire. The algorithm then predicted the participants’ answers based on their Facebook Likes - the more Likes, the more accurate the predictions. The algorithm’s predictions were then compared to those made by work colleagues, friends, family members and spouses.

“Amazingly, the algorithm needed a set of only ten Likes in order to outperform the predictions of work colleagues. It needed seventy Likes to outperform friends, 150 Likes to outperform family members and 300 Likes to outperform spouses.”

The people conducting the study concluded with the following prediction: “People might abandon their own psychological judgements and rely on computers when making important life decisions, such as choosing activities, career paths, or even romantic partners. It is possible that such data-driven decisions will improve people’s lives.”
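For the curious, the mechanics behind such a predictor are surprisingly mundane. Here is a minimal sketch of the general idea - a linear model mapping binary Like-vectors to trait scores - run on synthetic data. The numbers of people, Likes and traits are illustrative, and this is not the study’s actual model:

```python
import numpy as np

# Hypothetical sketch: predict personality trait scores from binary
# "Like" vectors using ridge regression, on synthetic data.
rng = np.random.default_rng(0)
n_people, n_likes, n_traits = 500, 50, 5

# Pretend hidden personality traits drive which pages each person Likes.
traits = rng.normal(size=(n_people, n_traits))
weights = rng.normal(size=(n_traits, n_likes))
likes = (traits @ weights + rng.normal(size=(n_people, n_likes)) > 0).astype(float)

# Fit one ridge-regularised linear model per trait:
# coef = (L'L + lam*I)^-1 L'T
lam = 1.0
coef = np.linalg.solve(likes.T @ likes + lam * np.eye(n_likes),
                       likes.T @ traits)
pred = likes @ coef

# Measure accuracy the way the study does: correlation between
# predicted and actual trait scores, one value per trait.
corrs = [np.corrcoef(pred[:, t], traits[:, t])[0, 1] for t in range(n_traits)]
print(f"mean correlation: {np.mean(corrs):.2f}")
```

The key point survives the simplification: each additional Like is another column of evidence, which is why the algorithm’s accuracy grows with the number of Likes it sees.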

So what’s the problem? If these algorithms can help us, why not use them? And indeed, this is probably the line of reasoning many of us will take when offered such promising technology. Only, at the same time, we will be consenting to the fact that a computer algorithm knows us better than we know ourselves. We, homo sapiens, are no longer the unique bearers of knowledge. We are simply an algorithm, one that can be known and figured out so long as enough data is provided.

Does that change anything? Perhaps it’s just a reality check for us homo sapiens. We’re no different from the laws of physics we study and discover, the rats we toy with in the lab, the animals we domesticate and breed on the farm. All obey the same fundamental mathematical laws, and there’s nothing more to it. So why all the fuss?

Well, two things. For one, accepting this destroys our cherished concept of free will, something most people won’t be okay with, which is why they will try to cheat the system. What do I mean by that? I mean that they will seek to retain their advantage over clever algorithms like Google, and know themselves better than anyone or anything else knows them. In fact, they will seek to be the ones in control of the algorithms.

Elon Musk, the CEO of Tesla and a founder of pioneering ventures such as PayPal and SpaceX, recently announced on his Twitter feed that he is launching a new project, Neuralink, designed to bring enhanced neurological functions to humans. Musk, a man who generally delivers on his outlandish projects, fears that if humanity is not careful, it will soon be surpassed by its digital creations. Surpassed, and then annihilated. This is what Neuralink is for - to enhance humans so that they keep a step ahead of the machine.

If you think this sounds like a Sci-Fi thriller, well, it does. That doesn’t make it any less scary though, nor any less grounded in facts. Not only might we lose our free will (or realise that we never had it to begin with) but the potential response to that will be to become cyborgs. This will inevitably herald a new era of unprecedented inequality: between those who can afford Musk’s ‘neural lace’ implants and those who cannot. Between the regular human and the enhanced human. Homo Sapiens and Homo Deus.

A chat with Felix, a good friend of mine, while working in his farming community in Germany reminded me of why Musk’s fears might not necessarily come true. While Harari’s book is certainly crucial reading for anyone interested in shaping the world to come, it focuses on one side of the spectrum of human activity. Harari certainly has good grounds for being wary of the mounting cult of Science and its omnipresence in contemporary society and culture. Yet this isn’t the only phenomenon taking hold of human consciousness and activity: millions of people are engaged in a myriad of projects exploring totally different ways of doing things, from the explosion of the Transition Town Movement all across the world, to social movements which continually rise up and make themselves heard, whether in the name of Bernie Sanders, racial equality, economic and social justice, or saving the planet.

Perhaps Amazon’s Echo will one day be capable of providing me with a conversation as stimulating as the one I had while weeding that bed of cabbages with Felix. The question facing us now is whether we want to let it come to that. We always, always, have a choice. We are all the woman who will spend her day talking to Echo and purchasing her items from Amazon’s ‘recommended for you’ list, just as we are all the man who refused to budge when faced with that tank in Tiananmen Square. What it comes down to is this: will we take control of our own destinies, or will we delegate those decisions to someone, or indeed something, else?

The history of humanity is the story of humans pioneering their own visions, and those that follow them; of people taking charge of their own lives, and people waiting for the military to dig them out of the snow; of people fighting to preserve their traditions, and of people giving in to Coca-Cola and McDonald’s. Elon Musk’s vision is one of empowering individuals to take control of their own destinies, but it is a dangerous bet which runs the risk of dividing humanity up beyond recognition.

The war in Syria, global warming, Trump’s latest outlandish tweet, Turkey’s referendum, Le Pen’s looming shadow… It’s not easy to know what to focus on when we are deluged with information. How many times have you read an article like this one and felt ‘oh yeah, that’s so true’, only for that feeling to be replaced by the very same one when the next article comes along and captures your attention?

Krishnamurti once said “that which is not understood and completed will repeat itself again and again till it is; there is no escape from this”. Until we understand why we so easily get distracted, why we refuse to take responsibility for so many of our actions and why we perpetually seek pleasure, comfort and convenience, we will continue to do all these things. Harari has shown us where this behaviour might lead us. Are we nothing more than an algorithm? The challenge is to explore the answers to that question through one’s actions.


© 2020 by Pierre Smith Khanna