By Ad astra
Have you ever felt overtaken by the velocity of world events? Have you ever felt overwhelmed by the pace of change? Have you ever wondered what the world will be like in Twenty Twenty-Four, forty years after George Orwell’s prophetic novel Nineteen Eighty-Four?
Studying the facts and contemplating what the world will be like in just seven years is alarming, such is the pace of change we see all around us. We can avoid distress by burying our heads in the sand, or we can take a clear-eyed look at the future and reflect on how best to manage it. Many choose the more comfortable option; in this piece let’s choose the latter.
This piece draws heavily on an article in Scientific American of 25 February this year, ‘Will Democracy Survive Big Data and Artificial Intelligence?’, which carried the subtitle: ‘We are in the middle of a technological upheaval that will transform the way society is organized. We must make decisions right now.’ The article was written by an illustrious group of authors: Dirk Helbing, Bruno S. Frey, Gerd Gigerenzer, Ernst Hafen, Michael Hagner, Yvonne Hofstetter, Jeroen van den Hoven, Roberto V. Zicari and Andrej Zwitter. Their CVs are at the foot of the article.
Most of you will not wish to read the Scientific American article in full, as it is very long. To make this piece readable, I have attempted to distill the essence of it, but to portray its message accurately I have quoted much of it at length. Therefore, this is a rather long piece, but as it focuses on an issue of critical importance to our future, I have not attempted to oversimplify its content. I hope you will have time to digest it.
If you think that our society is light-years away from acting out Orwell’s fantasy, reflect on the current angry debate around section 18C of the Racial Discrimination Act, the way in which the Department of Human Services has given the media personal details of a complainant against Centrelink in order to punish her publicly, and on the recent emergence of ‘alternative facts’ and ‘fake news’ in the US.
To remind you of the plot of Orwell’s Nineteen Eighty-Four, here is the beginning of a summary:
Winston Smith is a low-ranking member of the ruling Party in London, in the nation of Oceania. Everywhere Winston goes, even his own home, the Party watches him through telescreens; everywhere he looks he sees the face of the Party’s seemingly omniscient leader, a figure known only as Big Brother.
The Party controls everything in Oceania, even the people’s history and language. Currently, the Party is forcing the implementation of an invented language called Newspeak, which attempts to prevent political rebellion by eliminating all words related to it. Even thinking rebellious thoughts is illegal. Such thought-crime is, in fact, the worst of all crimes.
The rest of the summary, provided by SparkNotes, can be read here.
First, some facts from the Scientific American article. Remember, some of these are predictions and therefore may not be accurate; indeed, they will likely change over time.
As the digital revolution accelerates, how will it change our world? Here are some statements from the article:
The amount of data we produce doubles every year. In other words, in 2016 we produced as much data as in the entire history of humankind through 2015.
Every minute we produce hundreds of thousands of Google searches and Facebook posts. These contain information that reveals how we think and feel.
Soon, the things around us, possibly even our clothing, also will be connected with the Internet.
It is estimated that in 10 years’ time there will be 150 billion networked measuring sensors, 20 times more than all the people on Earth. Then, the amount of data will double every 12 hours.
This is known in the artificial intelligence world as Big Data, a phrase we will hear more and more.
Everything will become intelligent; soon we will not only have smart phones, but also smart homes, smart factories and smart cities. Should we also expect these developments to result in smart nations and a smarter planet?
Artificial intelligence is contributing to the automation of data analysis. It is now capable of learning, thereby continuously developing itself.
Algorithms can now recognize handwritten language and patterns almost as well as humans and even complete some tasks better than them. They are able to describe the contents of photos and videos.
News content is, in part, automatically generated.
In the coming 10 to 20 years around half of today’s jobs will be threatened by algorithms.
Today, algorithms perform 70% of all financial transactions.
40% of today’s top 500 companies will have vanished in a decade.
Just reflect on that – during the next ten years, by 2027, 200 of the top 500 companies will disappear – 140 of them in the seven years to 2024!
What will replace them? What will workers in those companies do after they have gone? Will there be alternative work? If not, how will they live? Are any governments planning for this eventuality?
The article continues:
It can be expected that supercomputers will soon surpass human capabilities in almost all areas, somewhere between 2020 and 2060.
Technology visionaries, such as Elon Musk from Tesla Motors, Bill Gates from Microsoft, Apple co-founder Steve Wozniak, and physicist Stephen Hawking are warning that super-intelligence is a serious danger for humanity, possibly even more dangerous than nuclear weapons.
One thing is clear: the way in which we organize the economy and society will change fundamentally. We are experiencing the largest transformation since the end of the Second World War; after the automation of production and the creation of self-driving cars, the automation of society is next.
With this, society is at a crossroads, which promises great opportunities, but also considerable risks. If we take the wrong decisions it could threaten our greatest historical achievements.
In the 1940s, the American mathematician Norbert Wiener invented cybernetics. According to him, the behaviour of systems could be controlled by means of suitable feedback. Very soon, some researchers imagined controlling the economy and society according to this basic principle, but the necessary technology was not available at that time.
Today, Singapore is seen as a perfect example of a data-controlled society. What started as a program to protect its citizens from terrorism has ended up influencing economic and immigration policy, the property market and school curricula.
China is taking a similar route. Recently, Baidu, the Chinese equivalent of Google, invited the military to take part in the China Brain Project. It involves running so-called deep learning algorithms over the search engine data collected about its users. Beyond this, a kind of social control is also planned. According to recent reports, every Chinese citizen will receive a so-called “Citizen Score”, which will determine under what conditions they may get loans, jobs, or travel visas to other countries. This kind of individual monitoring would include people’s Internet surfing and the behaviour of their social contacts.
With consumers facing increasingly frequent credit checks and some online shops experimenting with personalized prices, we are on a similar path in the West.
It is also increasingly clear that we are all in the focus of institutional surveillance. This was revealed in 2015 when details of the British secret service’s “Karma Police” program became public, showing the comprehensive screening of everyone’s Internet use.
Is Orwell’s character ‘Big Brother’ now becoming a reality for us?
Under the heading ‘Programmed society, programmed citizen’, the article goes on to describe how all this happened under our very eyes:
Everything started quite harmlessly. Search engines and recommendation platforms began to offer us personalised suggestions for products and services. These were based on personal data and metadata gathered from previous searches, purchases and mobility behaviour, as well as social interactions. While officially users’ identities are protected, they can be inferred quite easily.
Today, algorithms know pretty well what we do, what we think and how we feel – possibly even better than our friends and family or even ourselves.
Often the recommendations we are offered fit so well that the resulting decisions feel as if they were our own, even though they are actually not our decisions. In fact, we are being remotely controlled… The more that is known about us, the less likely our choices are to be free and not predetermined by others.
This is startling. It is only a small step from manipulating our buying behaviour to manipulating our political and social thinking and behaviour, just as happened to Winston in Nineteen Eighty-Four via the Thought Police.
The alarming predictions continue:
But it won’t stop there. Some software platforms are moving towards ‘persuasive computing’. In the future, using sophisticated manipulation technologies, these platforms will be able to steer us through entire courses of action, be it for the execution of complex work processes or to generate free content for Internet platforms, from which corporations earn billions.
The trend goes from programming computers to programming people.
These technologies are also becoming increasingly popular in the world of politics:
Under the label of ‘Nudging’, governments are trying to steer citizens towards healthier or more environmentally friendly behaviour by means of a ‘nudge’ – a modern form of paternalism. The new, caring government is not only interested in what we do, but also wants to make sure that we do the things that it considers to be right.
The magic phrase is ‘Big Nudging’, which is the combination of Big Data and Nudging.
This appears to be a sort of digital sceptre that allows one to govern the masses efficiently, without having to involve citizens in democratic processes. Could this overcome vested interests and optimize the course of the world? If so, then citizens could be governed by a data-empowered ‘wise king’, who would be able to produce desired economic and social outcomes almost as if with a digital magic wand.
Can you imagine how George Brandis would use the metadata he insists he must gather to ‘protect us from harm’? The fact that he is unable to explain what metadata is leaves us exposed to the manipulations of others who do know.
‘Nudging’ is already happening here.
When Centrelink client Andie Fox wrote an opinion piece for Fairfax Media claiming Centrelink had ‘terrorised’ her while chasing her for a debt she believed she did not owe, Fairfax, as reported in ABC News, published an article from the Government’s perspective suggesting Centrelink was being ‘unfairly castigated’. In that article Ms Fox’s personal information, including her history of claiming the Family Tax Benefit and her relationship circumstances, was exposed. The Department of Human Services, with the approval of the Human Services Minister Alan Tudge, supplied the information. Subsequently, the Department defended its ‘right’ to expose such intimate details in defence of its position, thereby ‘nudging’ any other potential complainant to back off, or else!
There is, though, a downside to such ‘nudging’ behaviour.
The scientific literature shows that attempts to control opinions…are doomed to fail because of the complexity of the problem. The dynamics of the formation of opinions are full of surprises. Nobody knows how the digital magic wand, that is to say the manipulative nudging technique, should best be used. What would have been the right or wrong measure often is apparent only afterwards.
During the German swine flu epidemic in 2009, for example, everybody was encouraged to go for vaccination. However, we now know that a certain percentage of those who received the immunization were affected by an unusual disease, narcolepsy. Another example is the recent attempt of health insurance providers to encourage increased exercise by handing out smart fitness bracelets, with the aim of reducing the amount of cardiovascular disease in the population; but in the end, this might result in more hip operations.
In a complex system, such as society, an improvement in one area almost inevitably leads to deterioration in another. Thus, large-scale interventions can sometimes prove to be massive mistakes.
Criminals, terrorists and extremists will try to take control of the digital magic wand sooner or later – perhaps even without us noticing. Almost all companies and institutions have already been hacked.
A further problem arises when adequate transparency and democratic control are lacking: the erosion of the system from the inside. Governments are able to influence the outcomes. During elections, they might nudge undecided voters towards supporting them, a manipulation that would be hard to detect. Therefore, whoever controls this technology can win elections by nudging themselves to power.
In order for manipulation to stay unnoticed, it takes a so-called resonance effect, where nudging is customized to each individual, an ‘echo chamber effect’. In the end, all you might get is your own opinions reflected back at you. This causes social polarization, resulting in the formation of separate groups that no longer understand each other and find themselves increasingly at conflict with one another.
In this way, personalized information can unintentionally destroy social cohesion. This can be currently observed in American politics, where Democrats and Republicans are increasingly drifting apart, so that political compromises become almost impossible. The result is a fragmentation, possibly even a disintegration of society.
Owing to the resonance effect, a large-scale change of opinion in society can be produced only slowly and gradually. The effects occur with a time lag, but they cannot be easily undone.
It is possible, for example, that resentment against minorities or migrants gets out of control; too much national sentiment can cause discrimination, extremism and conflict.
Are we not already seeing this play out before our very eyes as Hanson supporters and right-wing bigots vent their spleen?
Let us suppose there was a super-intelligent machine with godlike knowledge and superhuman abilities: would we follow its instructions?
This seems possible. But if we did that, then the warnings expressed by Elon Musk, Bill Gates, Steve Wozniak, Stephen Hawking and others would have become true: computers would have taken control of the world. We must be clear that a super-intelligence could also make mistakes, lie, pursue selfish interests or be manipulated. Above all, it could not be compared with the distributed, collective intelligence of the entire population.
Let’s jump to the end of this very long piece to give you ‘the bottom line’. Here is the heavily condensed conclusion, written by Yvonne Hofstetter, lawyer and artificial intelligence expert, on what happens when intelligent machines take over societal control, Orwell style!
Cybernetics is the science of information and control, regardless of whether a machine or a living organism is being controlled. Cybernetics promises: “Everything is controllable.”
For Norbert Wiener, inventor of cybernetics, the digital era would be a paradise, as the world has never produced as much data and information as it does today.
In the digital age, machines steer everyday life to a considerable extent already. We should, therefore, think twice before we share our personal data.
Control refers to the control of machines as well as of individuals or entire social systems like military alliances, financial markets or, pointing to the 21st century, even the electorate. Its major premise: keeping the world under surveillance to collect data. Connecting people and things to the Internet of Everything is a perfect way to obtain the required mass data as input to cybernetic control strategies.
Wiener proposed a new scientific concept for cybernetics: the closed-loop feedback. Feedback, such as the ‘Likes’ we give and the online comments we make, is a major concept of digitization. Does that mean digitization is the most perfect implementation of cybernetics? When we use smart devices, we are creating a ceaseless data stream disclosing our intentions, geographical position or social environment. While we communicate more thoughtlessly than ever online, in the background an ecosystem of artificial intelligence is evolving. Today, artificial intelligence is the sole technology able to profile us and draw conclusions about our future behavior.
An automated control strategy, usually a learning machine, analyzes our actual situation and then computes a stimulus that should draw us closer to a more desirable ‘optimal’ state. Increasingly, such controllers govern our daily lives. As digital assistants they help us make decisions amid the vast ocean of options and intimidating uncertainty. Even Google Search is a control strategy. When typing a keyword, a user reveals his intentions. The Google search engine, in turn, will not just present a list of best hits, but also a list of links that embodies the highest (financial) value for the company rather than for the user. By doing it that way, i.e. listing corporate offerings at the very top of the search results, Google controls the user’s next clicks. This, the European Union argues, is a misuse.
But is there any way out? Yes, if we disconnected from the cybernetic loop and simply stopped responding to digital stimuli. Cybernetics will fail if the controllable counterpart steps out of the loop, and we remain free to withhold our responses from a digital controller. However, as digitization escalates further, we may soon have no choice at all. Hence, we are called on to fight afresh for our freedom and our rights in the digital era, and in particular with the rise of intelligent machines.
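To make Hofstetter’s ‘control strategy’ a little more concrete, here is a minimal, purely illustrative sketch in Python of a closed-loop ‘nudging’ controller. Nothing in it comes from the article or from any real platform; the target value, the function names and the crude nudge policy are all assumptions made up for illustration. The point is only the shape of the loop: observe the user’s behaviour, compare it with a target state the controller has chosen, send the stimulus predicted to close the gap, and feed the user’s reaction straight back in.

```python
import random

# Purely hypothetical sketch of a closed-loop "nudging" controller.
# Every name and number here is an assumption for illustration only.

TARGET_ENGAGEMENT = 0.8  # the "optimal" state the controller wants to steer us towards


def observe(history):
    """Estimate the user's current state from their recent behaviour (the feedback)."""
    recent = history[-10:]
    return sum(recent) / len(recent)


def choose_stimulus(current_state):
    """Pick the nudge predicted to move the user closer to the target state."""
    gap = TARGET_ENGAGEMENT - current_state
    # Crude policy: the bigger the gap, the stronger the nudge.
    return "strong_recommendation" if gap > 0.3 else "gentle_recommendation"


def user_response(stimulus):
    """Stand-in for the real world: how the user happens to react to the nudge."""
    if stimulus == "strong_recommendation":
        return random.uniform(0.5, 1.0)
    return random.uniform(0.2, 0.8)


# The loop itself: observe, compute a stimulus, observe the reaction, repeat.
history = [0.3]
for _ in range(20):
    state = observe(history)
    stimulus = choose_stimulus(state)
    history.append(user_response(stimulus))  # each click or 'Like' feeds straight back in

print(f"Final observed engagement: {observe(history):.2f}")
```

The disturbing part, as the article stresses, is not the code, which is trivial, but the scale of the personal data that feeds it.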
Is that frightening enough? It ought to be. Not only are we being subsumed into the cybernetic loop, where we inadvertently give the very feedback that the manipulators of our choices crave, but we are also largely unaware that we are being categorized, manipulated, ‘nudged’ and inveigled into positions not of our choosing, but into those chosen by others for their own purposes, whether commercial or, more sinisterly, political.
Be afraid, very afraid!
Big Brother is watching you!
This article was originally published on The Political Sword