What is special about the human form of intelligence? An algorithm is a means of processing information in order to generate knowledge, and we now understand the brain to be simply an organic algorithm. In that respect, it is little different to the computer algorithms which form artificial intelligence. Every year that passes undermines the belief that there are aspects of human intelligence that can never be replicated more effectively by A.I. Through machine learning, some computer algorithms are now themselves evolved by A.I to the extent that humans no longer understand how they work. It is true that there is no near-term prospect of creating artificial consciousness, but capitalism places no value on consciousness. How will Humanism survive if human beings become economically useless? The same point applies to war, as the mass armies of the 20th Century are replaced by non-human combatants in the 21st Century. Nor is art safe. Music critics have praised music for its soulfulness and emotional resonance, only to discover to their horror that it was created by an algorithm.
Know Thyself
In the future, computer algorithms may be able to understand our desires better than we understand them ourselves. This will not be too difficult, given how limited our understanding of our own minds is. If this happens, there will be an argument that A.I should make our decisions for us, because it is better able to satisfy our desires than we are. This could be facilitated in part by the amount of data we give away about ourselves, from our Google searches and Facebook Likes to the various biometric data gathered by wearable tech. In the future, when we watch a film or read an eBook, algorithms may be able to understand and remember our emotional reactions better than we can, by tracking our heart rate and dopamine levels and monitoring the muscle movements in our faces. Nor does A.I feel societal pressure to pretend to like ‘Citizen Kane’ even if we found it boring. If algorithms know us better than we know ourselves, and do not have the faults of the ‘narrating self’ mentioned in my previous post, wouldn’t we be better served at the next election if Google voted for us? On the day that algorithms know us better than we know ourselves, authority may transfer permanently from humans to machines.
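To make this concrete, here is a minimal, purely hypothetical sketch (in Python) of how such a system might reduce biometric signals to a single ‘engagement’ score while you watch a film. Nothing here comes from Harari: the signal names, weights and thresholds are invented for illustration, with skin conductance standing in as a crude arousal proxy for the dopamine levels mentioned above.

# Illustrative sketch only: a hypothetical wearable-plus-camera system that
# blends biometric signals into one "engagement" score per viewing session.
# All signal names, weights and thresholds are assumptions made for this example.

from dataclasses import dataclass

@dataclass
class BiometricSample:
    heart_rate_bpm: float      # from a wearable heart-rate monitor
    skin_conductance: float    # crude arousal proxy, arbitrary units
    facial_tension: float      # 0.0 (relaxed) to 1.0 (tense), from camera tracking

def engagement_score(samples: list[BiometricSample],
                     resting_heart_rate: float = 65.0) -> float:
    """Average a rough per-sample engagement estimate over a viewing session."""
    if not samples:
        return 0.0
    total = 0.0
    for s in samples:
        # Elevation above resting heart rate, scaled to roughly 0..1.
        heart_delta = max(0.0, s.heart_rate_bpm - resting_heart_rate) / 40.0
        # Weighted blend of the three signals; the weights are arbitrary.
        total += (0.5 * min(heart_delta, 1.0)
                  + 0.3 * min(s.skin_conductance / 10.0, 1.0)
                  + 0.2 * s.facial_tension)
    return total / len(samples)

# Example: a viewer who claims to love 'Citizen Kane' but whose body says otherwise.
session = [
    BiometricSample(heart_rate_bpm=66, skin_conductance=1.2, facial_tension=0.10),
    BiometricSample(heart_rate_bpm=64, skin_conductance=0.9, facial_tension=0.05),
]
print(f"Measured engagement: {engagement_score(session):.2f}")  # low score suggests boredom

The point is not the particular arithmetic but that, given enough signals like these, an algorithm could build a record of what actually held our attention, regardless of what we claim to have enjoyed.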
Humanists believe that every human life has intrinsic value. What happens to Humanism if the governing elite are able to use technology to upgrade themselves so that they are no longer ‘human’? Historically, the benefits of technology eventually cascaded down to everyone, but that may not be the case in the future. To use Yuval Noah Harari’s example, 20th Century medicine was an egalitarian project aimed at healing the sick. In the 21st Century it is also becoming an elitist project aimed at upgrading the healthy. Why would elites divert resources to cure the masses who are economically, militarily and artistically useless? What happens when the experiences of humans and superhumans diverge, such that they can no longer relate to each other?
Techno-Humanism
The threat to Humanism comes when humans are no longer able to compete with artificial intelligence. One way to counter this threat is to try to upgrade the human mind. This might lead to a new branch of Humanism called ‘Techno-Humanism’. However, it comes with substantial risks. Scientists already have the ability to alter the human mind despite understanding very little about it, and therefore about the side effects those alterations could cause. Further, what happens if changes to human minds are driven by economic objectives? As a result of the agricultural revolution, domesticated animals generally became more docile, because docile animals are easier to manage. Will we become more like efficient data processors, and will that really be an upgrade?
The greater problem is that Techno-Humanism does not resolve the fundamental conflict between technology and Humanism. Humanism says that we should follow our hearts and stay true to ourselves. Technology seeks control, so that it can solve problems by ‘fixing’ or upgrading us. From that perspective, our inner voices are conflicting, random and inefficient, and following them is not the best way to achieve happiness. If that is true, technology should try to manipulate and suppress our inner voices rather than obey them. Techno-Humanism still rests on the assumption that our desires give meaning to our lives, but how can that belief be sustained as it becomes clearer that we are not in control of our desires, and that they can be so easily manipulated by technology?
Dataism – The New Religion?
Another path is to abandon Humanism altogether and replace it with a new religion called Dataism. This religion says that the universe consists of data flows, and that the value of anything is determined by its ability to generate and process data. It is a religion because it makes ethical statements of value: the ultimate good is to maximise the flow of data. For true believers, there is no meaning in something if nobody knows about it. Human history, understood from a Dataist perspective, is simply the history of improving the efficiency of data-processing. Humanity is a data-processing system and we, as individuals, are its chips. That history has consisted of increasing the number of chips and their connectivity, in order to process more data and allow it to flow more freely. The aim of Dataists is to create a global data-processing system (an ‘Internet of Things’) which is all-knowing and all-powerful. Meaning in life comes from connecting to the data flow, rather than from human experiences. Dataism is not against human experiences; it just doesn’t think they are intrinsically valuable. Humans were valuable as the world’s most effective data-processors, but that is no longer the case. What about human emotions such as love? They are just biochemical processes, evolved on the African savannah to help pass on genes.
Dataism can help us to understand recent phenomena. Viewed from this perspective, Capitalism and a Managed Economy are simply two different data-processing systems, one distributed and one centralised. Capitalism won the Cold War because distributed data-processing systems are more effective. Dataists believe that A.I is superior to humans in its ability to process very large amounts of data, and therefore that knowledge generated by A.I is superior to human knowledge. This is not just theory: how many people would ask a friend how to get somewhere rather than trust Google Maps? The stock exchange is the fastest and most efficient data-processing system ever created, and it is now too complicated to be managed by human beings. The vast majority of investment decisions are either made by A.I or based on A.I recommendations.
Power to the Machines
Democracy is another decentralised data-processing system, and it has outperformed more centralised models in recent times. What happens when democracy can’t keep up? This is already happening with the Internet and Social Media, which evolve so quickly that any democratic debate is superseded before it reaches a decision. One wonders what governments think about TikTok. This does not mean a victory for Totalitarianism, which is an even less efficient data-processing system. Governments have increasingly felt powerless in recent years: they manage but do not lead. This might be because they simply can’t keep up with modern life in the Information Age. It is unlikely that this power vacuum will persist forever. In the past we followed God (through His human representatives), and in the modern age we choose our own representatives to follow (whether explicitly by electing leaders or implicitly by accepting Dictators). In the future we may follow algorithms.
Dataism may turn out to be based on incorrect knowledge (e.g. organic minds might consist of more than algorithms), but that may not prevent it from taking over the world. Even if one of the theistic religions (those that include a God or gods) is true, that still means all of the others are wrong, and yet many of them have been very popular. Harari argues that aspects of Dataism have already become established within the scientific community. The increasing tendency of young people to share their data may also be a sign of things to come. What happens if Dataism takes over the world? The Humanist pursuits of human health and happiness may come to seem far less important. If power shifts from humans to algorithms, we may find our desires manipulated to support the new priorities without our realising it. In trying to eradicate illness and live forever, we may find that we ourselves become nothing but electronic data: just another part of the Internet of Things.
The potential replacement of Humanism with Dataism is clearly not the most urgent problem facing humanity, and neither is it a certainty. Harari is always keen to remind us that he does not claim to know the future: his book ‘Homo Deus’ is a work of possibilities, not prophecies. Nevertheless, the transfer of control from human beings to A.I has already started. The challenge of how we make these new technologies work for the benefit of human beings in the long term is one that we all need to start thinking about. It is too important to be left to the multi-billionaires of Silicon Valley.