The Intelligence Race

9 February 2015, in Codebreaking our Future

By Michael Lee, author of Codebreaking our Future

One of the occupational hazards of being a futurist is worrying about things long before they happen. One aspect of the future that worries me a great deal is how ill-prepared we are for the accelerating role of Artificial Intelligence (AI) in the economy.

Just as humans once outstripped animals in the race to dominate Earth, thanks to our superior brainpower and use of tools, the danger now exists that Homo sapiens will fall behind supersmart machines and AI systems in overall efficiency. On top of that, the coming rise of cyborgs – technology-enhanced and AI-enabled humans – could create an intelligence divide between them and us even more serious than the digital divide that currently shapes economic competitiveness.

While the mass media gradually dumb down human culture to roughly the level of sentience regularly exhibited on the Jerry Springer Show, while human thought is increasingly fragmented and trivialised by social media such as Twitter, by celebrity gossip and by media-propagated groupthink, and while the once great democratic institution of investigative reporting is reduced to Murdoch-style commercialised and “embedded” journalism, AI is slowly and silently developing far greater capacities, enabling its systems and networks to control the main levers of society, from stock market trading to traffic control, from production systems to communication networks. Automation is advancing at breakneck speed, from ATMs and kiosks to drones, from the Google search engine to the Google self-driving car powered by its Google Chauffeur software. In space exploration, automation has dominated from the beginning: Yuri Gagarin became the world’s first spaceman thanks to the largely automated Vostok 1 spacecraft which carried him into orbit.

The truth is, human intelligence is not advancing in today’s post-modern culture, while artificial intelligence is. We need to recognise that there are diverging trajectories of development here. One day, friends, around mid-century, we might wake up in a society controlled almost exclusively by computer programs, automated systems, IT elites and cyborgs, with humanity at large reduced to a pale shadow of itself, a declining subspecies.

Machines and AI systems are already enjoying a spectacular ascent to prominence in today’s economy, taking jobs once carried out by humans. We are entering a phase in which robots and AI systems will take over more sophisticated jobs than those on the assembly line – receptionists, clerks, teachers, lawyers, medical assistants, legal assistants, pilots – while so-called “expert systems”, like those that help doctors diagnose diseases such as diabetes, encroach on professional work. In finance, jobs from loan officers to stockbrokers and traders are being computerised and automated in the coming world of cyber finance. Finance, it seems, increasingly relies on AI and on extremely fast, powerful computers running algorithms that can analyse and execute trades according to mathematical models.
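To make the algorithmic-trading point concrete, here is a minimal, purely illustrative sketch of the kind of mechanical rule such systems automate – a simple moving-average crossover. All names, numbers and the trading logic itself are hypothetical; real trading systems are vastly more sophisticated.

```python
# Illustrative only: a toy moving-average crossover rule of the kind
# an automated trading system might follow. All numbers are hypothetical.

def moving_average(prices, window):
    """Average of the last `window` prices."""
    return sum(prices[-window:]) / window

def trading_signal(prices, short_window=5, long_window=20):
    """Return 'buy', 'sell' or 'hold' by comparing short- and long-term trends."""
    if len(prices) < long_window:
        return "hold"  # not enough history to decide
    short_avg = moving_average(prices, short_window)
    long_avg = moving_average(prices, long_window)
    if short_avg > long_avg:
        return "buy"   # short-term trend has risen above the long-term trend
    if short_avg < long_avg:
        return "sell"  # short-term trend has fallen below the long-term trend
    return "hold"

# Hypothetical price history; a real system would stream live market data.
prices = [100 + 0.3 * i for i in range(30)]
print(trading_signal(prices))  # -> 'buy' for this steadily rising series
```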

Clearly, what computer programs and systems can do is moving inexorably up a chain of sophistication and complexity. Eventually, one imagines, what can be automated probably will be, given the relentless competitive pressures for efficiency and efficacy which prevail in society. A widely cited 2013 Oxford University study by Carl Benedikt Frey and Michael Osborne estimated that 47% of US employment is at high risk of computerisation.

Let’s briefly revisit what we mean by the terms Artificial Intelligence, the digital divide and, now, the Intelligence Divide.

Artificial Intelligence, the branch of computer science which investigates which human capacities can be replicated and performed by computer systems, is a phrase coined by John McCarthy for the landmark 1956 Dartmouth summer workshop. The field includes programming computers to play games against human opponents [1], robotics, developing “expert systems” capable of real-time decision-making and diagnosis, understanding and translating human languages (including artificially generated speech), and simulating neural processes in animals and humans in order to map and imitate how the brain works.
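To give a flavour of the “expert system” idea mentioned above, here is a minimal sketch of a rule-based diagnostic check. The rules, thresholds and field names are invented purely for illustration and have no medical validity.

```python
# A toy rule-based "expert system" of the kind described above.
# Rules and thresholds are invented for illustration only.

RULES = [
    # (condition over the patient's readings, suggested flag)
    (lambda p: p["fasting_glucose_mmol_l"] >= 7.0, "possible diabetes"),
    (lambda p: p["systolic_bp_mmhg"] >= 140,       "possible hypertension"),
]

def flags(patient):
    """Apply each rule in turn and collect the flags that fire."""
    return [flag for condition, flag in RULES if condition(patient)]

# Hypothetical patient record; a real system would hold thousands of rules.
patient = {"fasting_glucose_mmol_l": 7.8, "systolic_bp_mmhg": 120}
print(flags(patient))  # -> ['possible diabetes']
```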

This is one of my favourite definitions of AI: “The study of the modelling of human mental functions by computer programs.” [2]

The digital divide refers to the gap between those who have access to Information and Communications Technologies (ICTs), especially the Internet, and those who don’t. In effect, the digital “have-nots” are the billions living in poor and deprived social conditions who lack the education and skills to make use of digital technology and the internet even if they had access to them.

The Global Information Technology Report 2012: Living in a Hyperconnected World [3], published by the World Economic Forum, found that the BRICS countries, led by China, still lag significantly behind in ICT-driven economic competitiveness.

In 2014, the Organisation for Economic Co-operation and Development (OECD) looked into the role of education in cementing the global digital divide. It concluded that in many countries large parts of the adult population have non-existent or insufficient ICT problem-solving skills. For example, it reported, “Between 30% and 50% of the adult population in Ireland, Poland and the Slovak Republic fall into this category.” [4] Yet, as advancing societies become more knowledge-intensive, a growing number of jobs require at least basic ICT skills.

But the digital divide in the world, based on both access to ICT and the educational skills to use it, is only the precursor of an Intelligence Divide (ID) which, ultimately, could become an even deeper social fracture than racism has been. The Intelligence Divide would be the growing gap between what human intelligence can do without computer power and what AI systems, computer programs and AI-enabled humans, including cyborgs, can achieve across a range of intelligent activities, including thinking, calculating, decision-making, perceiving, communicating and organising. The divide would be measured as ratios of efficiency and effectiveness for the same activity performed by humans on one side and by AI systems and cyborgs on the other, in any given social context.
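Stated concretely, the proposed measure is just a ratio. The sketch below shows, with entirely hypothetical numbers and function names, how such a human-versus-machine comparison might be computed for a single task.

```python
# A minimal sketch of the efficiency ratio proposed above, using entirely
# hypothetical numbers for a single task (e.g. reviewing documents).

def efficiency(tasks_completed, hours, error_rate):
    """Effective output per hour, discounted by the error rate."""
    return (tasks_completed / hours) * (1 - error_rate)

human = efficiency(tasks_completed=40, hours=8.0, error_rate=0.05)
machine = efficiency(tasks_completed=4000, hours=8.0, error_rate=0.02)

# The 'Intelligence Divide' for this task, expressed as a ratio:
print(f"machine/human efficiency ratio: {machine / human:.1f}x")  # ~103.2x
```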

Whether or not a deep Intelligence Divide develops, an Intelligence Race is already underway on the economic front between humans and AI. This race is not about whether a computer can beat a human chess champion but about which jobs can be done better, and more efficiently, by machines than by humans. More and more processes can be automated, and in the long run this will likely mean fewer jobs for humans. The Intelligence Race is sure to become a defining trend of this century.

What concerns me is that our post-modernist world, dominated by the trivialisation of the mass media, the corruption of democracy and the globalisation of self-serving commercialisation, is catapulting humanity into intellectual decline at the very moment AI is on the rise. This is one of the main reasons why I’ve become a neo-progressionist. Why let Artificial Intelligence progress at our expense instead of boosting all forms of progress in a wiser, more holistic approach?

The 6,000-year journey of civilisation is still in its infancy when measured against the time-scales of the cosmos and the biosphere, which probably means humanity is nowhere near its full potential. Of course, I would much prefer to see human intelligence increasing, not declining, before it is too late to stop the Intelligence Divide from taking root in our history.

See Michael Lee’s video on YouTube “Finding Future X” at https://www.youtube.com/watch?v=XQItLRhzkMY

[1] In May 1997, IBM’s Deep Blue supercomputer defeated world chess champion Garry Kasparov in a six-game match.

[2] Collins English Dictionary, 4th edition (HarperCollins, 1998), p. 85. Some other definitions of AI include:

“A term applied to the study and use of computers that can simulate some of the characteristics normally ascribed to human intelligence, such as learning, deduction, intuition, and self-correction. The subject encompasses many branches of computer science, including cybernetics, knowledge-based systems, natural language processing, pattern recognition, and robotics.” – The Cambridge Encyclopedia, 4th edition (Cambridge University Press, 2000).

“The theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.” – The New Oxford Dictionary of English (Oxford University Press, 1998).

[3] http://www.weforum.org/news/global-information-technology-report-highlights-emergence-new-digital-divide

[4] OECD, “Trends Shaping Education 2014, Spotlight 5”, www.oecd.org/edu/ceri/trendsshapingeducation2013.htm
