Technological Singularity – a hypothetical point in the future when technological growth becomes uncontrollable and irreversible, resulting in unfathomable changes to human civilization (Wikipedia). Many envisage this time as something similar to what is depicted in the Terminator movies (sadly, this author hasn't seen any of those movies, which may partly account for some of the optimism he bears about the future).
At that point in time, we will have robot police. Robot teachers. Robot citizens. Visible robots. Invisible robots. All kinds. But most will have highly specialised functions and most will not come across as robots at all.
However, there are two major scenarios on offer, with several variations of each – think of it as a spectrum of possibilities.
One major scenario is that machines take over, enslave people and rule over them, making humans do as they – the machines – want.
Another is that intelligent machines will take away the need for humans to perform tasks that are repetitive (e.g. bookkeeping, medical diagnosis, teaching), dangerous (underground mining, maintaining reactors in nuclear power stations) or hard (road and bridge construction, farming) – freeing humans up for more mentally taxing duties and a lot of leisure.
The truth is that no one knows for sure, and my best guess is that we will arrive at a mixture or combined variation of the two scenarios above.
We will have automation at scale. Entire industries will disappear.
And some of the robots will not be visible – think of search engines today. The entire process of fetching the answers we ask for is invisible to us: we don't see the questions flying into the ether, crunched by large robotic arms bearing information sledgehammers, or answers assembled on an assembly line full of sensors and actuators that test the quality of the answers and make sure each person using the search engine gets answers relevant to them. Yet that is, figuratively, what happens in the background; all we see are the answers to our questions. In fact, in the future, search results may be presented before we even think up the search questions, and we may not need contraptions like our phones and computers for anything from the most basic to the most advanced searches, as we will have mind-machine interfaces.
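The invisible pipeline described above – matching a question against documents, then personalising the ranking per user – can be sketched in a few lines. This is a toy illustration under my own assumptions (the scoring, data and function names are all invented), not how any real search engine works:

```python
# Toy sketch of an invisible, personalised search pipeline:
# score documents by keyword overlap, then nudge the ranking with a
# per-user interest profile so two people asking the same question
# can receive different answers. All names and data are illustrative.

def score(doc_terms, query_terms, user_interests):
    relevance = len(doc_terms & query_terms)          # crude keyword match
    personal = len(doc_terms & user_interests) * 0.5  # personalisation boost
    return relevance + personal

def search(docs, query, user_interests):
    q = set(query.lower().split())
    ranked = sorted(
        docs.items(),
        key=lambda item: score(item[1], q, user_interests),
        reverse=True,
    )
    return [title for title, _ in ranked]

docs = {
    "Robot chefs": {"robot", "cooking", "kitchen"},
    "AI in medicine": {"ai", "diagnosis", "medicine"},
    "Self-driving cars": {"ai", "driving", "cars"},
}

# The same query, personalised for two different users:
foodie = search(docs, "ai robot", {"cooking"})
doctor = search(docs, "ai robot", {"medicine", "diagnosis"})
```

The point of the sketch is only that the user never sees any of this machinery – they type a query and get back an ordered list shaped by who they are.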
At singularity, we won't need to do a search at all: either we will no longer be required to do any work (and hence any such thinking), or the processes will be so streamlined that answers will just appear in-process as questions arise – think of it as the automatic transmission in vehicles today (though that doesn't even begin to explain it).
Think of some of the most creative activities today: there will be a mix of robots and people for those too. The general consensus for now is that when it comes to creative processes, people will still be relevant. But will creative processes bear any resemblance to what we are used to and exposed to today? For example, why would anybody need to be convinced that they need something when they have a powerful recommender engine on hand to recommend the specific things they need?
And why would anyone have to find a new, more efficient way of doing something when a robot can be, or has been, built and tasked with that duty?
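To make the "powerful recommender engine" above less abstract, here is a minimal sketch assuming nothing more than purchase histories: suggest items that people with overlapping histories bought, which this user hasn't yet. The data and function are hypothetical, and real recommenders are far more sophisticated:

```python
# Minimal collaborative-style recommender sketch: items are scored by how
# much the buyer's history overlaps with other shoppers who own them.
from collections import Counter

def recommend(user, histories, top_n=2):
    mine = histories[user]
    scores = Counter()
    for other, theirs in histories.items():
        if other == user:
            continue
        overlap = len(mine & theirs)  # how similar our tastes are
        for item in theirs - mine:    # items this user doesn't own yet
            scores[item] += overlap
    return [item for item, _ in scores.most_common(top_n)]

histories = {
    "ada":   {"orange juice", "bread", "coffee"},
    "grace": {"orange juice", "coffee", "milk"},
    "alan":  {"bread", "tea"},
}

suggestions = recommend("ada", histories)
```

Even this crude version needs no persuasion step: it simply surfaces what the data says the user is likely to want next.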
The path to singularity is uncharted. But most proponents look at advances in technology – especially in AI, machine learning and large-scale data processing – and conclude, rather logically, that at some point the machines we are creating will have advanced so much that they no longer need humans.
- What are the chances of us ever reaching singularity?
- If Singularity happens, what will humans resort to?
Whilst I have no comforting answers to either question above, and whilst I feel strongly that we may never see machine dominance – at least not beyond human capacity, for reasons I will state below – it is important to note that those same reasons will lead to the massive outsourcing of many human functions to machines, seen and unseen.
We are a long way off from Singularity – even if we consider some of the postulations that we will get there by 2050, a time when most of those reading this will be retired or nearly retired. As such, whilst we may not be the primary targets of singularity, we have a role in preparing the coming generations for this possible event.
Why machines may never replace humans
- Ethics – governments, researchers and other powerful stakeholders recognise the dangers of runaway technologies and are advocating strong ethics. In some cases this means preventing the creation of a superintelligent machine until we have the skills and capacity to manage one. In other cases it simply means insisting that those allowed to deal with these machines – their design, development, testing and deployment – must be grounded, must place firmer belief in humans than in machines, and must focus on ensuring that only technologies which advance human causes are created, while those with the potential to go dark or cause havoc are not.
- Advances to date are highly specialised – think of the most advanced technology or technology system you know today. These systems can do only one thing very well and almost nothing else. Alexa, Siri and Google Duplex, in all their brilliance, cannot drive, cook and perform surgery all at the same time. Neither can board-game-winning computers do much of anything else outside of playing board games (https://www.theguardian.com/books/2016/mar/19/computers-board-games-take-over-world and https://www.scientificamerican.com/article/how-the-computer-beat-the-go-master/)
- Technology almost always bears the biases of its creators, leaving a lot of room for alternative opinions and improvement opportunities
So, for as long as we do not end up with a global overlord in human form, we are unlikely ever to develop machines that will themselves become overlords. Rather, there will be continued improvements to available technologies, and civil debate and ethical considerations will always have a place of importance in how technology evolves.
If you consider the foregoing, humans will be at the centre of how machines evolve, and humans will therefore decide which of the two scenarios above will hold. But can we as humans be trusted?
Man has been known to seek leverage and to use it for reasons that cannot be described as altruistic.
Imagine something as 'sacred' as religion. Faith in a supreme being. Faith in God. Yet man still finds a way to take advantage of others' search for meaning and for an explanation of why we humans are here. As recently as the 1960s, the world watched in awe as the likes of Father Divine (https://www.britannica.com/biography/Father-Divine) – a charismatic leader who, whilst seen in recent literature as more of a social reformer than a cult leader, inadvertently spawned the likes of Jim Jones. Jones not only took advantage of his followers in as many ways as Father Divine – making them slaves, taking all their money in schemes masquerading as giving to a higher cause – but took this a few notches higher by killing followers in what is perhaps still regarded as the biggest religious massacre in history (https://www.theguardian.com/world/2018/nov/17/an-apocalyptic-cult-900-dead-remembering-the-jonestown-massacre-40-years-on). Even today, some can safely argue that we are still witnessing a religious-industrial complex, albeit a bit more civilised but with the same fundamental underpinnings: taking advantage of one's leverage to make the most out of followers who are none the wiser.
The same argument can be applied to the side effects of brute capitalism, where plantation owners have always exploited plantation workers. In today's lingo, Wall Street is immune to the effects of its actions (https://www.theguardian.com/commentisfree/2013/jan/23/untouchables-wall-street-prosecutions-obama) whilst the rest of humanity takes the full brunt of them – a case in point being the 2008 financial crisis, which most still do not understand today but felt and bore the full brunt of, whilst the institutions at whose feet the blame was laid got bailouts from governments.
And with factory farming, humans (again, those with capital power) may have lowered the collective ethics of all of us as a race, as we either do not know how much torture our food goes through before landing on our tables or do not care enough (no, I am not an animal rights advocate; I am only exploring some of the ways we have let our guard down or accepted the status quo).
So, some of the fears being expressed about large-scale adoption of technology may not be unfounded, and we as a race need to start thinking and talking about them, and to take action to ensure the apocalypse never happens – or to delay it as much as possible, perhaps until such a time when humans can cope with the challenges it may unleash. Above all, we need to put the needs of the collective – the human race – above those of individuals and individual accomplishments. But the question is: are we built for that? Can we really put aside our individuality and work together for the collective good?
How will brands be affected – irrespective of which variation of Singularity we eventually reach, or where we are on the journey there?
From a customer point of view:
- Customers will not rely on commercials and ads as they do today to be convinced which of the options before them they should choose – they will always have access to the facts and are more likely to be swayed by those than by cool ads
- Some of the communication interfaces we know today will disappear – for example, it is already possible for consumers in certain markets to programme their household equipment to place orders for consumables as they are depleted (see LG's smart fridges and Amazon Dash)
- Brands will have the opportunity to know more about their customers than ever before – today's Facebook targeting will look like Mickey Mouse stuff compared to what will happen in the future
- Brands may cut out the middlemen (the agencies and all players in the strategic marketing value chain) – leveraging the immediacy of technology to target customers and convert them instantly (you see an ad for your preferred orange juice on your smart fridge just as you realise you are out of orange juice, and with a single click you can order it for same-day delivery)
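The auto-replenishment idea above (household equipment ordering consumables as they run low) reduces to a simple threshold rule. This is a hedged sketch under my own assumptions – the Item type, thresholds and order format are invented for illustration and do not reflect any vendor's actual API:

```python
# Sketch of threshold-based auto-replenishment, as a smart fridge or
# Dash-style button might implement it: when an item's stock falls to or
# below its reorder threshold, emit an order to top it up.
from dataclasses import dataclass

@dataclass
class Item:
    name: str
    quantity: int
    reorder_at: int   # stock level that triggers a reorder
    reorder_qty: int  # how much to top up by

def check_and_reorder(pantry):
    """Return the (name, quantity) orders the appliance would place."""
    orders = []
    for item in pantry:
        if item.quantity <= item.reorder_at:
            orders.append((item.name, item.reorder_qty))
    return orders

pantry = [
    Item("orange juice", quantity=0, reorder_at=1, reorder_qty=2),
    Item("milk", quantity=3, reorder_at=1, reorder_qty=2),
]

orders = check_and_reorder(pantry)  # only the orange juice needs reordering
```

The commercial implication is in who gets the order: whichever brand sits behind that default threshold rule wins the sale without any ad being seen.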
Further reading and references
- Superintelligence: Paths, Dangers, Strategies by Nick Bostrom (a synopsis of the book exists on Wikipedia: https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dangers,_Strategies)
- The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies by Erik Brynjolfsson and Andrew McAfee (a summary is available here: https://medium.com/of-all-things-tech-progress/summary-of-the-second-machine-age-28f5ad99c7bb and you may purchase a copy here: https://www.amazon.com/Second-Machine-Age-Prosperity-Technologies/dp/0393350649)
- The work of the Future of Life Institute: https://futureoflife.org/
- Ray Kurzweil, father of the singularity, on brand trust, how AI can help advertisers & technology aiding human evolution https://www.thedrum.com/news/2017/07/05/ray-kurzweil-father-the-singularity-brand-trust-how-ai-can-help-advertisers
- Amazon Dash – https://www.amazon.com/b?ie=UTF8&node=17729534011
- Samsung Smart Fridges – https://www.samsung.com/us/explore/family-hub-refrigerator/overview/