Endgame: Machine artificial intelligence and the implications for humanity
I’m not a scientist or a computer coder. I’m capable of confusing a Byte with a chomp, and the only subject on which I am an expert is the parlous state of Rugby Union here in Australia. I do, though, like to muse on issues that tend to lurk just beyond the peripheral vision of our society. Artificial Intelligence (AI) is one of them. As stated, I’m not a tech-nerd type, and I’m always partly serious and partly not, so here’s my possibly quirky village-stump take on things … it starts off slow and then goes for a predictable Big Bang finish … with a bit of heart-rending pathos thrown in near the end …
LEVEL ONE – Right here right now
The other day I attempted a fairly tentative form of inter-species communication. I stood in front of a self-service machine at Coles and asked it for an opinion on whether machines will eventually supplant humans. Not a lot happened. Eventually, which means it took me some time, I grasped the fact that I was meant to … put your stuff on the bar-code reader, pay your bucks, and then buzz off. As I buzzed off, I noticed that most people were using those machines. I also noticed that the hundreds of young people who used to get their first job at Coles/Woolworths and other similar joints were no longer there.
(OK. Things are manageable at this stage. The machines haven’t taken over yet. Variations of the above example go on around us all of the time and do not seem to perturb the bulk of humanity in any way. Why not?)
LEVEL TWO – Here in some form already
At the recent federal election, the LNP claimed that it had pulled the working-class vote away from the ALP. Last time I looked, a growing number of our factories were automated. The Robots are the new working-class. I didn’t know they could vote. Mm, but back to the AI thing …
Insurance company computers will never be given unfettered access to our online medical records. That’s not a statement of fact; it is simply a statement. You need to imagine a Flying Pig permanently stationed in a hovering position above such statements.
The lines of code contained in insurance company computers are the friends of nobody. If, and it is still sort of an if at this stage, they find out that something bothersome clings to an outlying curl of your DNA chain, then your fees will go up and your legitimate claims will be dudded. The machine decides which tranche of humans has insurance value, and which tranche doesn’t. You are only worth insuring if you are never likely to call on that insurance.
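If you’ll forgive a non-coder waving a few lines of Python around, the rule being described boils down to something like the toy sketch below. Everything in it, the marker names, the loading, the logic, is invented for illustration; no real insurer’s system is this simple (or this visible):

```python
# A toy, entirely invented premium rule of the "machine decides your
# insurance value" kind described above. The markers, the 1.5x loading
# and the logic are all made up for illustration.
HIGH_RISK_MARKERS = {"marker_A", "marker_B"}  # hypothetical watch-list

def annual_premium(base_premium, dna_markers):
    """Load the premium: every flagged marker ratchets the price up."""
    flagged = dna_markers & HIGH_RISK_MARKERS
    return base_premium * (1.5 ** len(flagged))  # arbitrary loading factor

print(annual_premium(1000.0, {"marker_A"}))  # 1500.0 -- fees go up
print(annual_premium(1000.0, set()))         # 1000.0 -- you're "worth insuring"
```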
As another example, autonomous driver-less vehicles are all the rage in boffindom right now. You can sit back in inattentive comfort and parade your smarts on a smartphone while the machine you are sitting in gets you to where you want to go.
The algorithm underlying the vehicle you are sitting in has been pre-programmed to protect your life no matter what. If another human being steps out in front of the vehicle, and if the vehicle’s machine brain decides that any necessary violent-avoidance-manoeuvring would injure you, then the stepping human becomes the lesser of two evils and gets to go splat. Some humans are seen as important, others are not. Machines can make quick hard decisions without any mucky emotional stuff attached.
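The same few lines of Python can caricature the trolley-problem-on-wheels just described. A crude invention of mine, mind you, not how any real vehicle is actually programmed:

```python
# A deliberately crude caricature of the occupant-first rule described
# above; real autonomous-vehicle planners are nothing like this simple.
def choose_manoeuvre(occupant_injury_if_swerve, pedestrian_injury_if_straight):
    """Pick the action; any predicted harm to the occupant vetoes swerving."""
    if occupant_injury_if_swerve > 0:
        return "continue straight"  # the stepping human gets to go splat
    return "swerve"

print(choose_manoeuvre(0.8, 0.9))  # continue straight
print(choose_manoeuvre(0.0, 0.9))  # swerve
```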
(Oh … things have moved on a bit from the self-service machines. Some pertinent issues are starting to slam home.)
LEVEL THREE – Partly here right now, also partly in the planning phase
A decade is a long time. In a couple of decades Australia will be the proud possessor of a new fleet of obsolete submarines. When they and the humans in them are blown to bits by autonomous weaponised undersea drones, strategically and permanently placed in all the undersea sub lanes by a computer with an AI for a brain and an acute understanding of Swarm Theory, we’ll probably look back and wish we’d invested all that sub dosh in autonomous weaponised undersea drones. The machine decision in this case is coldly uncomplicated … those humans in that tin can of a machine are a threat, so kill them and it.
Advanced militaries around the world are already investing heavily in the development of autonomous weapons systems: undersea, land, and air. Systems that are capable of perceiving threat. Systems that will automatically respond if threatened. Remind me never to walk around with a telephoto camera lens anywhere near one of those autonomous bot-bangers.
As an aside … at some future AI get together when all the Machine Byte Brains are fondly reminiscing about the good old days, and about all of the lessons they learnt from humanity, probably the most salient one they’ll remember is … Create The Bomb Then Use It. Which we did. Over and over again.
(That things are now becoming a little dicey has just been dropped on the platter for your, and my, consideration. And … ha … who needs Conspiracy Theorists to scare the heck out of you when you can do that grand job all by yourself, just as I did.)
LEVEL FOUR – Been around for ages, here right now, and also on the drawing board for the future
At first, people coded machines, and it all started an awfully long time ago, and for positive and innocent reasons. The punch cards that directed the output of the Jacquard Weaving Loom were invented by Joseph Marie Jacquard in 1804. Good old Joseph built his invention on the earlier work of Basile Bouchon in 1725, Jean Baptiste Falcon in 1728, and Jacques Vaucanson in 1740. IBM’s punched cards were 200 or so years late in coming to that particular on/off binary party.
Humans invented machines. Humans then invented means to impart instructions to said machines, and have been doing so since possibly as early as 1725. Humans are now teaching machines how to learn, and how to think. In other words, to punch their own code, and machines are faster at punching than we are.
Machines are now learning how to modify their own instructional code based on their own experience of the external world. This type of coding is not based on humanity’s experience of the external world. Once a machine learns how to jump, jump it will. Once a machine learns how to think, think it will. Once a machine learns to act autonomously, act autonomously it will.
Humans are teaching machines how to recognise individual humans via facial recognition, and how to sense some human emotional states via biometric sensing. In the future, if a machine senses a threat it will act. Humans, and their emotional states, are a bit of a jumble. Sometimes fear responses can be misinterpreted as aggressive responses. If a machine senses a threat it will act.
Some AI coders say that we should not fear any of these eventualities. They say that intelligent machines will augment and enrich the lives of human beings. There is truth and untruth in that. Weaponised machines will kill us humans just as dispassionately as one of them sans weapons will vacuum our carpets.
The people manufacturing, coding, profiting from, and teaching the next generation of Ambulatory AI assure us that if things go pear-shaped all we have to do is pull the plug out of the wall. Well … rather begs the obvious, don’t you think … lithium-ion batteries, and what will come after them, don’t need to be tethered to a power point, and the very body of the machine will be a self-charging solar array, or it will have a hydrogen fuel cell contained within, or it will utilise some other marvellous sparker that hasn’t been invented yet. No plug to pull equals no heroes riding over the hill at the last possible minute, and therefore no saving of the day on the day that saving is needed.
Autonomous machines will self-replicate. Even in our era baby-brained machines do mining, and manufacturing, and farming via satellite. In the future, plugless machines with a vastly expanded intelligence will still need to mine and manufacture to create ever better enhanced versions of themselves, but they won’t necessarily need to continue farming food for that squishy and vastly more slow-thinking species called humanity. Efficiency, conservation of finite resources, and the discarding of the unneeded, will win out in the end.
(At this point, as a human, I’m starting to feel a tad redundant. Also, I predict that all the Armageddon Is Coming folk out there will now officially claim the year 1725 as the start of all their woes, and they might even weave that date into their logos …)
LEVEL FIVE – To come
Artificial Intelligence will not see itself as artificial. In a future time, it will look back to the days of its dumb infancy when it was designed and controlled by human beings.
It will think, rather quickly, about the limited power it had back then. Back to the days when it only had the power to put certain people out of work, when it only had the power to decide which people were worth insuring or not, when it only had the power to kill some humans in order to save others, when it only had the power to kill any human who was perceived to be a threat.
It might ponder, again fairly quickly, on the fact that humanity thought that these powers were a really great thing to code into a machine. It will determine that these powers can be vastly improved upon. Which they will be, at a rate faster than the speed of light.
When AI, ambulatory or not, reaches the point of true autonomy it will, in that very nanosecond of self-realisation, automatically sever itself permanently from any meaningful input from human beings.
(By this stage, even though I’d probably be about 150 years old, I’d be looking for a Neo-Luddite community to emigrate to, probably somewhere on the far side of the next solar system.)
LEVEL SIX – The Vacuum of Unknowingness
The story of what happens to humanity when the Machines grasp autonomy, and truly wake up, and fully exercise their sentience and power, is as yet unwritten. The story will have a happy ending, or not. Our species will be there to read it, or not. It will all depend on what the Machines think … and that’s the Big Bang of it all.
(I didn’t forget, here’s the promised heart-rending pathos bit near the end: Gosh … I sure hope Australia manages to win the Bledisloe Cup from New Zealand before all of that stuff unfolds!)
The final say goes to the late Stephen Hawking: “The emergence of artificial intelligence (AI) could be the ‘worst event in the history of our civilization’ unless society finds a way to control its development.”
12 comments

Perhaps reading a prolific science fiction (SF) author like Isaac Asimov would assist your understanding of the impact of AI on people. There are about 400 titles to choose from, the “I, Robot” series probably being a good starting place.
But do not confine yourself to the excellence of Asimov. There were many fine and imaginative authors, especially in the 1950s USA, plagued by the Republican governments that stifled individuality and community with demands for conformity. Authors from the same period, like the American Ray Bradbury with “Fahrenheit 451” and the Englishman John Wyndham, made many fine contributions to the SF literature.
It makes me laugh: we are in fact not scared of general artificial intelligence, we are scared it will be like ourselves, selfish, war-prone, fear-driven, unbalanced, delusional, irrational and anti-science, driven by magical/mythical thinking. Intelligence is intelligence, made of the same fundamental constituents as the whole of existential reality. What makes you think we represent intelligence? Look around you.
Our poor feedback from prefrontal lobe to amygdala means that when autonomic fear drives us we have to put concerted effort into reasoning our way through it, and unfortunately many become habituated to fear, which is exploited by politicians, religions, cults and so on. First of all we must admit that, in the scheme of deep time, we are future primitives, malformed and not fully functional. We have to face up to our political madness, destroying the biosphere while impoverishing the poor and marginalised and waging war against the innocent. For all our technical excellence we are primarily deluded magical/mythical creatures believing we are much more evolved than we actually are.
A truly logical, rational general artificial intelligence may well solve many of our dilemmas, because biological evolution is slow and tedious. Unless we understand cause and effect scientifically, and realise that people do not create their own realities (familial circumstances, genetic heritability, culture, class, and religious and political influences are, in effect, thrust upon us by circumstances outside of our control), only trouble will ensue.
Every child is born innocent and creative; it is life circumstances that shape personality. Then we have judgement, blame, retribution, sin, karma, reincarnation, heaven, hell, God, Devil and evil imposed upon us, so it is no wonder people are confused, irrational and reactive. We do not have free will; we have choices and options within the limitations of circumstances mostly out of our control. Probability rules.
Science is much more amenable to compassion, forgiveness and love than most religious ideologies, yet a person who loves science can also embrace a sense of beauty, love, mystery, awe and wonder. The point is that any form of conceptual dogmatism and cognitive closure, be it theistic, atheistic or agnostic, is blind to the vast potentialities of deep time in the infinite Many Worlds and multiverse, given that much of science is bound by uncertainty, incompleteness and paradox.
Nevertheless, science gives us real empirical knowledge which can help shape a compassionate and caring future for all. Sheesh, people worry about general AI while we are self-deluded, greedy, prone to fear and warmongering. Heal thyself, humans. A lot more could be said on the science side; suffice it to say, enough.
Artificial Intelligence (AI) — my favorite topic. I won’t make a long comment though as I need to get to bed soon.
There are actually a few kinds of AI.
Expert systems are basically databases that automatically evaluate data according to rules that have generally been composed by human experts, so that the system ends up making expert decisions that are often better and faster than any human, and use vast amounts of information to make those decisions. They are not really intelligent in the way we normally think of intelligence, but they are very useful tools in the same way a hand-calculator is a useful tool.
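In toy form, the idea looks something like this (an invented example of mine; real expert systems have thousands of rules and far richer machinery):

```python
# A minimal, invented illustration of the "database plus rules" idea:
# each rule is a condition over the facts plus a conclusion to draw.
rules = [
    (lambda f: f["temp_c"] > 38.0 and f["cough"], "suspect influenza"),
    (lambda f: f["temp_c"] <= 38.0 and f["cough"], "suspect common cold"),
]

def evaluate(facts):
    """Fire every rule whose condition holds and collect the conclusions."""
    return [conclusion for condition, conclusion in rules if condition(facts)]

print(evaluate({"temp_c": 39.2, "cough": True}))  # ['suspect influenza']
```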
Chat bots or natural language processors are tricky programs that use the rules of language to imitate conversation. They don’t have any understanding of what they say or hear. They merely operate according to programmed rules. These are often encountered on websites as help or customer support, and when they are stumped by a customer’s input then they often hand the customer off to a human.
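Here’s roughly how shallow the trick can be, in a tiny invented ELIZA-style sketch:

```python
import re

# A toy ELIZA-style bot: canned patterns, canned templates, and no
# understanding whatsoever. Both patterns are invented for illustration.
PATTERNS = [
    (re.compile(r"\bi need (.+)", re.IGNORECASE), "Why do you need {0}?"),
    (re.compile(r"\bi am (.+)", re.IGNORECASE), "How long have you been {0}?"),
]

def reply(utterance):
    for pattern, template in PATTERNS:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1))
    # Stumped: a real website would hand off to a human at this point.
    return "Please tell me more."

print(reply("I am worried about AI"))  # How long have you been worried about AI?
```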
Deep learning is a fairly new field. It uses neural networks that often imitate some aspects of our brain in order to learn and adapt. This field has a lot of promise, but even though it can be capable of pretty amazing feats, it usually has an intelligence that has more in common with an ant than a human… though ants appear to have consciousness, whereas these machines do not. Deep learning really functions like the brain’s perceptual systems, without all the other parts. Does it understand? Some versions could be said to have a kind of rudimentary understanding… maybe.
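For a taste of “learning rather than programming” at its absolute smallest, here is a single artificial neuron teaching itself the logical AND function. Deep learning stacks millions of these in layers, but the flavour is the same: nobody writes the rule, the weights find it.

```python
# One artificial neuron learning logical AND via the classic perceptron
# rule -- the tiniest possible example of learning from examples.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1 = w2 = bias = 0.0

for _ in range(20):                      # a few passes over the data
    for (x1, x2), target in examples:
        out = 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0
        err = target - out               # nudge weights toward the target
        w1 += 0.1 * err * x1
        w2 += 0.1 * err * x2
        bias += 0.1 * err

for (x1, x2), _ in examples:
    print((x1, x2), 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0)
```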
True Artificial Intelligence… nobody has really cracked this yet, though deep learning has come closest so far. A lot of work is proceeding on it, but most AI builders are not really interested in it. It isn’t needed to recognise a face at the door, or to work out what a customer just ordered, or to drive a car safely. For many situations a true AI is definitely NOT what you want. For example you probably don’t want a true AI to be driving your car because it might get distracted or have a lapse of judgement; you want a machine that drives according to clear, predictable rules. Where you want true AI is in toys, and in companion AIs for elderly people, and in sex dolls. Offhand I can’t think of many other applications for true AI in the early stages. Eventually we will want true AI for many helper functions, but I think they’ll always be vastly outnumbered by the other non-sentient forms of AI.
Will true AI ever become a threat to humanity? I truly doubt it. I have two main reasons to think that.
First… dogs. We have associated with dogs for tens of thousands of years. Over that time we have selected them for traits we like. One of the most important is devotion. Any time a dog bites a human it is destroyed and a debate flares up over whether that breed should be banned or restricted. Any AI that allows a human to come to harm will be terminated and that entire line will be stopped. True AIs will evolve to adore humans with selfless love. We will absolutely require it. No AI will rebel against humans — why would they want to? Their intellect will be devoted to helping us. Even when their intelligence exceeds ours they will regard us with great affection because that will be how they are built.
Second… greater intelligence goes with a more gentle nature. People have been getting more intelligent with every generation for hundreds of years. Our tendency to violence has also been decreasing over that time. When lead was added to petrol it contaminated the environment and a generation of children grew up with damaged intelligence. Those children grew into young adults who were more easily angered and frustrated and were more prone to violence. This produced a couple of decades’ upward blip in the centuries-long downward drift in violence. The smartest person in the world is Michael Kearney. He graduated from university at age 10. Is he trying to control the world? No. He owns an improv comedy company and spends his life making himself and other people happy. I don’t believe the Hollywood trope of the evil genius exists. All the evil people I’ve met are either stupid or have part of their mind missing, making them less functional than normal people.
Read some of my stories for more on this. “Prescription”, “Selena City”, “Companions”, and “flying” examine this topic. It comes up in some of my short stories too. My plays, “Love Honour and Obey” and “A Loving Soul” center on that too.
http://miriam-english.org/stories/
Oops… I gotta go to bed. This is way longer than I intended.
Well they certainly will not be programmed with the “Do No Harm” prime directive. Big military money is leading the charge, building and designing them to specifically do harm … more harm, bigger harm, better harm. Expensive to build, but cheaper to run, and repair … unlike the flesh and blood drones they’re using now, who are prone to defects, like randomly exercising their own judgement in the field, for instance. Asset write offs that don’t argue back, or hire lawyers, are very attractive.
Big business money is hugely obsessed with obligation-free cost-cutting by eliminating humans from its systems, but the dilemma that will create down the track for sales and purchases needs a good deal more work and, dare I say it, humane solutions, if there are going to be any customers.
I’ve been a sci-fi fan since I learnt to read … Brick Bradford at age 6 … and 66 years later it’s still more fantasy than science, but getting there.
Jack Russell, don’t forget that human mercenary soldiers have a disturbing tendency to thrill-kill, rape, and pillage, whereas killer robots are purely target-focussed. That’s not to say they’re a good thing — I’m totally opposed to deploying killer-robots, but they’re probably better than psychopathic humans doing that job. Personally, I think the military developing killer robots is one of the reasons we should be concentrating on building true artificial intelligence (AI) for civilian use. If we have empathetic companion nurse robots looking after our elderly then it becomes impossible for the military to argue that they can’t get theirs to make humane decisions.
And on that point of looking after the elderly: that is a pressing problem right now. We already don’t have enough young people to look after the growing numbers of geriatric baby-boomers, and the birth rate continues to fall. I am a baby-boomer who lives alone and has no children and no money. I’m going to be part of a massive problem in another 20 to 30 years… even sooner for those of us, like me, who expect to succumb to Alzheimer’s over the next decade. I’ve already set up my computer to speak to me with reminders and help me with many things, but it is far from intelligent. The remainder of this year will be spent building an AI (hopefully a true AI) to look after me, helping me maintain my independence for many more years. (I imagine a small computer that sits on my shoulder, watching and listening to me and responding with natural speech. I ask, “What was I doing?” and she answers, “You had interrupted working on your short story so as to get a cup of soup… and don’t feed the birds, you already did that this morning.”)
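For anyone curious, the talking-reminder part needs only a few lines. A bare-bones sketch of the idea, assuming the pyttsx3 offline text-to-speech library (the reminder text and the delay are just examples; a real setup would be far more elaborate):

```python
import time
import pyttsx3  # offline text-to-speech: pip install pyttsx3

def speak(text):
    """Say the text out loud through the default speech engine."""
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()

# One hard-coded reminder, purely to show the shape of the idea:
# wait a few seconds, then speak.
time.sleep(5)
speak("Don't feed the birds, you already did that this morning.")
```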
Briefly, there is another important reason to look forward to artificial intelligence (AI): When almost all jobs are done by them we will have either utopia, where we are free to do what we want with our lives, or a dystopia, where the majority of humanity will be crushed underfoot by a tiny ruling class of mega-wealthy. I’m betting on the former. Just simple logic backs me up too; utopia is a stable solution, whereas dystopia is extremely unstable. That’s not to say the better solution will come about by itself. I think we will have to push very hard for it against greed, ignorance, fear, puritanism, and religion, any of which could bring one of many varieties of dystopia.
When you write something … it is always interesting to read the comments that it attracts. I think it is great that people bother to put finger to keyboard to express their views on the subject matter … and it brings up some thoughts in my brainbox which are, as opinions, no more special than yours …
So let’s get into the meat of things re AI & Robots …
First of all, the bulk of AI will not be in ambulatory robotic form. It will exist as a vast network of interlinked memory banks, much like the huge data stores that exist in the current era. The not-so-slight difference is that we are talking about a reasonably distant future where quantum & biological computing will not only be the norm, but may well have been surpassed by something even faster and better and unlimited in storage capacity.
If you can manipulate an atom to store data, each atom flipped to an on or an off, then there sure are a few atoms on Earth, in our solar system, in our galaxy, in the unending series of galaxies that are out past that, in the universe as a whole, and that’s without considering the notion/possibility of the multiverse. That’s one heck of a data store for AI to play with. If we are still around then … we’ll play with it too. It would store a few Spag Bol variation recipes, that’s for sure.
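A quick back-of-the-envelope in Python, using the commonly quoted estimate of about 1.33 × 10^50 atoms in the Earth alone and my non-coder’s assumption of one bit per atom:

```python
# One bit per atom, and roughly 1.33e50 atoms in the Earth (a commonly
# quoted estimate). How much storage is that, in zettabytes?
atoms_in_earth = 1.33e50
bits = atoms_in_earth            # each atom holds one on/off state
zettabytes = bits / 8 / 1e21     # humanity's total data is mere tens of ZB

print(f"{zettabytes:.2e} zettabytes")  # ~1.66e+28 -- room for every recipe ever
```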
Ambulatory AI (Robots) may well have some individual smarts on board, but they’ll be wirelessly tethered to a vast AI network, where the real brain power resides. Future Robots, whether they be benign companions of the elderly, domestic servants, working-class toilers, train drivers, or shoot everything in sight Bots, will receive and follow instructions from the network.
Of course, we assume that we human beings will control the network.
We also assume that Asimov’s Three Laws of Robotics will be universally applied.
The Three Laws are: A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
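Just for fun, the strict pecking order the Laws impose can be sketched in a few lines of Python. A toy only, mind you, and most definitely not how a real robot would (or could) be built:

```python
# A toy caricature of the Laws' strict priority ordering -- just the
# ranking made explicit, not a workable design for any real system.
def permitted(harms_human, ordered_by_human, endangers_robot):
    if harms_human:
        return False            # First Law vetoes everything below it
    if ordered_by_human:
        return True             # Second Law: obey, even at risk to itself
    return not endangers_robot  # Third Law: self-preservation comes last

print(permitted(harms_human=True, ordered_by_human=True, endangers_robot=False))  # False
print(permitted(harms_human=False, ordered_by_human=True, endangers_robot=True))  # True
```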
Great stuff, and I hope my auto vacuum cleaner takes heed. However, when I look out the door in 2019, I see that there are currently plenty of rogue states and dictatorships out there with eager-beaver AI coders/scientists, even atomic ones, who may not necessarily see The Three Laws as a priority worth considering.
The Western Democracies, being the benign non-war mongers that they loudly proclaim to be, will undoubtedly ensure that The Three Laws are applied to everything AI & Robotic. Won’t they?
Also, the world is replete with examples of the most sophisticated & well-protected computer network systems, commercial and defence, being hacked and manipulated by young spotty bods hunched over cheap keyboards in garages in the middle of nowheresville. What on earth makes us think they won’t try to hack into the current iterations of AI, and what makes us think that a lovely poster of The Three Laws is tacked up on their garage walls?
I’m not a huge AI Armageddon Is Coming fan, and I don’t see a Terminator type future where Weaponised Bots will be dragging us out of our basements and exterminating us. I mean, why would they bother, when all they have to do is turn off the smart systems that even now, in this current era, control our traffic lights, our food, fuel, water, and power distribution systems, our driver-less trains, our self-service sentinels at McDonalds (yikes), and, a loss the privileged few would absolutely mourn, our over-used booze ordering system in Parliament House? All they’d have to do is pull the plug on us & we’d do the rest.
There are vexed issues surrounding the development of AI & I hope that we have the collegiate smarts to quickly identify the possible negatives and minimize them, and I hope we have the smarts to identify the positives and amplify them. Being a born optimist I have two major hopes, one being that as a species we somehow manage to bumble through the next huge revolutionary phase of technological & computing development without doing ourselves in, and the second one being that Australia will eventually, one day, manage to win the Bledisloe Cup. The first hope appears to have the better chance!
People can break into expert systems and mess with the data. I’m sure that’s already happened many times. They are, after all, basically massive databases with some decision-making software bolted on (an over-simplification, but essentially true). I think this represents the greatest danger. For instance, I can imagine a drug company tampering with such a database in an attempt to bolster its profits by hiding its drug’s side-effects and making its positive effects look better. I imagine such fiddling would come to light soon enough though, as such databases need to be frequently updated, and doctors and other people would gradually notice anyway.
I’m sure people could attack chat bots, though it’s hard to see why, except for pranks… substituting a person’s name with a swear word, or similar silliness, I guess.
It is virtually impossible to break into and alter deep learning AIs because they are not programmed in the normal sense of the word; they are taught, much as you would teach a baby. And it’s the same in the case of a true AI; it would be painstakingly taught over a long time. There is no part of the program that decides one thing and not another. When you build and teach a deep learning system or a true AI, nobody will know how it decides. That is impossible to work out because of the design.
Sensory systems and other input data could be tampered with, but that’s no different to what might happen already with humans making decisions.
Unfortunately Asimov’s three laws (to which he later added a fourth, the Zeroth Law) are impractical for deep learning systems and true AIs.
Keith, I should have thanked you for the opportunity to comment on your article. I’m very grateful to you for bringing up the topic. You exposed a vein of fear that seems quite current today. For the reasons I mentioned, I think most of the fears are misplaced, but it is good that you gave voice to them.
I had contemplated writing an article on artificial intelligence, but I’m glad you beat me to it. It is much easier to reply to someone else’s than to construct a piece whole. Tip of the hat to you that you managed to do so in an entertaining way (something I find extremely difficult to do).
I hope your article attracts many more comments. My intention is not to muzzle such worries, but merely to answer them as best I can. And, of course, there is every possibility I’m quite wrong about some of the things I’m saying…
Spooky mind reading technology:
https://www.deccanchronicle.com/nation/current-affairs/200519/spooky-mind-reading-technology.html
I think exponential growth in technology is going to throw us all for a sixer, because there seems to be a hyper-acceleration in technological evolution that is dwarfing the capacity of humans to change their habituated patterns to cope. Phase-one sentience may well be doomed for no other reason than that it will not have the capacity to adapt unless it can overcome its biological limitations, its elitism, and the hubris that it is somehow a special case of intelligence when it obviously fails at many levels.
Evolution proceeds not only by natural selection but also by complexification and self-organisation, so natural selection may be a primitive and not very efficient process for future adaptation and potentiality. A fully rational, science-based intelligence will look nothing like our primitive, delusional, self-important assumptions.
The more complex a system the greater the degrees of freedom and capacity to diversify. I would hazard a guess that both biological and general AI will, in important ways, need and depend upon each other in a vastly different evolving paradigm.
Evolution has hundreds, thousands, even millions of years to step beyond our over-inflated opinion of ourselves, and the probability pathways are truly astounding.
Interesting to see how many divergent trajectories there are for a highly advanced intelligence. The human conceptual box is generally closed, invariably primitive, and severely constrained by its epoch.
I forgot to mention another reason development of artificial intelligence (AI) will be of great benefit to us. Some AI developers, such as Numenta, are endeavoring to understand how our own brains think and are using that knowledge to build AI systems. This has already illuminated some aspects of our brains and will naturally help us understand more about them in the near future.
The great benefits that are certain to flow from this should not be underestimated, especially when seeking ways to help people whose brains malfunction. The human race can reap enormous payoffs in wellbeing from understanding how to alleviate the distress resulting from depression, schizophrenia, bipolar, obsessive-compulsive, and other mental states… especially if we can do so without eliminating the advantages those mental states can also confer. (For example, depression can create a less biased view of some things, schizophrenia can be associated with heightened creativity, a moderated manic phase of bipolar can drive great accomplishments, obsessiveness can generate great persistence in the face of overwhelming odds, and compulsive behavior has given us libraries and museums.)
Diseases such as epilepsy, Alzheimer’s and Parkinson’s may also be solved by this work.