The AIM Network

Endgame: Machine artificial intelligence and the implications for humanity

I’m not a scientist or a computer coder. I’m capable of confusing a Byte with a chomp, and the only subject on which I am an expert is the parlous state of Rugby Union here in Australia. I do, though, like to muse on issues that tend to lurk just beyond the peripheral vision of our society. Artificial Intelligence, AI, is one of them. As stated, I’m not a tech-nerd type, and I’m always partly serious and partly not, so here’s my possibly quirky village-stump take on things … it starts off slow and then goes for a predictable Big Bang finish … with a bit of heart-rending pathos thrown in near the end …

LEVEL ONE – Right here right now

The other day I attempted a fairly tentative form of inter-species communication. I stood in front of a self-service machine at Coles and asked it for an opinion on whether machines will eventually supplant humans. Not a lot happened. Eventually, which means it took me some time, I grasped that I was meant to put my stuff on the bar-code reader, pay my bucks, and then buzz off. As I buzzed off, I noticed that most people were using those machines. I also noticed that the hundreds of young people who used to get their first start-job at Coles/Woolworths and other similar joints were no longer there.

(OK. Things are manageable at this stage. The machines haven’t taken over yet. Variations of the above example go on around us all of the time and do not seem to perturb the bulk of humanity in any way. Why not?)

LEVEL TWO – Here in some form already

At the recent federal election, the LNP claimed that it had pulled the working-class vote away from the ALP. Last time I looked, a growing number of our factories were automated. The robots are the new working class. I didn’t know they could vote. Mm, but back to the AI thing …

Insurance company computers will never be given unfettered access to our online medical records. That’s not a statement of fact, it is simply a statement. You need to imagine a Flying Pig permanently stationed in a hovering position above such statements.

The lines of code contained in insurance company computers are the friends of nobody. If, and it is still sort of an if at this stage, they find out that something bothersome clings to an outlying curl of your DNA chain, then your fees will go up and your legitimate claims will be dudded. The machine decides which tranche of humans has insurance value, and which tranche doesn’t. You are only worth insuring if you are never likely to call on that insurance.

As another example, autonomous driver-less vehicles are all the rage in boffindom right now. You can sit back in inattentive comfort and parade your smarts on a smartphone while the machine you are sitting in gets you to where you want to go.

The algorithm underlying the seats you are sitting on has been pre-programmed to protect your life no matter what. If another human being steps out in front of the vehicle, and if the vehicle’s machine brain decides that any necessary violent-avoidance-manoeuvring would injure you, then the stepping human becomes the greater of two evils and gets to go splat. Some humans are seen as important, others are not. Machines can make quick hard decisions without any mucky emotional stuff attached.
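The priority rule described above can be caricatured in a few lines of code. To be clear, this is a deliberately crude, purely illustrative sketch; no manufacturer publishes logic this simple, and the function and parameter names are invented:

```python
# A deliberately crude caricature of the "protect the occupant no matter
# what" rule. Purely illustrative: no real vehicle uses logic this simple,
# and all names here are invented for this sketch.

def choose_manoeuvre(pedestrian_ahead: bool, swerving_injures_occupant: bool) -> str:
    """Decide what the vehicle does when someone steps out in front of it."""
    if not pedestrian_ahead:
        return "continue"
    if swerving_injures_occupant:
        # The occupant is hard-coded as the top priority, so the
        # pedestrian loses the comparison.
        return "continue"
    return "swerve"

print(choose_manoeuvre(pedestrian_ahead=True, swerving_injures_occupant=True))  # prints "continue"
```

The entire moral weight of the paragraph above sits in that second `if`: one hard-coded comparison, no mucky emotional stuff attached.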

(Oh … things have moved on a bit from the self-service machines. Some pertinent issues are starting to slam home.)

LEVEL THREE – Partly here right now, also partly in the planning phase

A decade is a long time. In a couple of decades Australia will be the proud possessor of a new fleet of obsolete submarines. When they and the humans in them are blown to bits by autonomous weaponised undersea drones, strategically and permanently placed in all the undersea sub lanes by a computer with an AI for a brain and an acute understanding of Swarm Theory, we’ll probably look back and wish we’d invested all that sub dosh in autonomous weaponised undersea drones. The machine decision in this case is coldly uncomplicated … those humans in that tin can of a machine are a threat, so kill them and it.

Advanced militaries around the world are already investing heavily in the development of autonomous weapons systems: undersea, land, and air. Systems that are capable of perceiving threat. Systems that will automatically respond if threatened. Remind me never to walk around with a telephoto camera lens anywhere near one of those autonomous bot-bangers.

As an aside … at some future AI get-together, when all the Machine Byte Brains are fondly reminiscing about the good old days, and about all of the lessons they learnt from humanity, probably the most salient one they’ll remember is … Create The Bomb, Then Use It. Which we did. Over and over again.

(That things are now becoming a little dicey has just been dropped on the platter for your, and my, consideration, and … ha … who needs Conspiracy Theorists to scare the heck out of you when you can do that grand job all by yourself, as I just did.)

LEVEL FOUR – Been around for ages, here right now, and also on the drawing board for the future

At first, people coded machines, and it all started an awful long time ago, and for positive and innocent reasons. The punch cards that directed the output of the Jacquard Weaving Loom were invented by Joseph Marie Jacquard in 1804. Good old Joseph built up his invention from the earlier work of Basile Bouchon in 1725, Jean Baptiste Falcon in 1728, and Jacques Vaucanson in 1740. IBM’s punched cards were 200 or so years late in coming to that particular on/off binary party.

Humans invented machines. Humans then invented means to impart instructions to said machines, and have been doing so since possibly as early as 1725. Humans are now teaching machines how to learn, and how to think. Punch their own code, in other words, and machines are faster at punching than we are.

Machines are now learning how to modify their own instructional code based on their own experience of the external world. This type of coding is not based on humanity’s experience of the external world. Once a machine learns how to jump, jump it will. Once a machine learns how to think, think it will. Once a machine learns to act autonomously, act autonomously it will.
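The idea of a machine revising its own behaviour from experience, rather than from human-written rules, can be sketched with a toy example. What follows is a minimal two-armed bandit learner, the simplest textbook form of learning-from-reward; the action names and payoff probabilities are invented, and no real system is this small:

```python
# Toy sketch of "learning from experience": the program starts with no
# preference between two actions and revises its own value estimates using
# only the rewards it observes. Action names and probabilities are invented.

import random

random.seed(0)

estimates = {"jump": 0.0, "wait": 0.0}  # learned value of each action
counts = {"jump": 0, "wait": 0}

def reward(action: str) -> float:
    # The "external world": jumping pays off far more often than waiting.
    chance = 0.8 if action == "jump" else 0.2
    return 1.0 if random.random() < chance else 0.0

for _ in range(1000):
    # Explore occasionally; otherwise exploit the current best estimate.
    if random.random() < 0.1:
        action = random.choice(["jump", "wait"])
    else:
        action = max(estimates, key=estimates.get)
    counts[action] += 1
    # Incremental averaging: the machine updates its own "instructions"
    # from what the world paid it, with no human editing the rules.
    estimates[action] += (reward(action) - estimates[action]) / counts[action]

print(max(estimates, key=estimates.get))  # the behaviour it taught itself
```

Nobody tells the program that jumping is the better move; it discovers that on its own, which is the whole point of the paragraph above. Once a machine learns how to jump, jump it will.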

Humans are teaching machines how to recognise individual humans via facial recognition, and how to sense some human emotional states via biometric sensing. In the future, if a machine senses a threat it will act. Humans, and their emotional states, are a bit of a jumble. Sometimes fear responses can be misinterpreted as aggressive responses. If a machine senses a threat it will act.

Some AI coders say that we should not fear any of these eventualities. They say that intelligent machines will augment and enrich the lives of human beings. There is truth and untruth in that. Weaponised machines will kill us humans just as dispassionately as one of them sans weapons will vacuum our carpets.

The people manufacturing, coding, profiting from, and teaching the next generation of Ambulatory AI assure us that if things go pear-shaped all we have to do is pull the plug out of the wall. Well … that rather begs the obvious, don’t you think … lithium-ion batteries, and whatever comes after them, don’t need to be tethered to a power point, and the very body of the machine will be a self-charging solar array, or it will have a hydrogen fuel cell contained within, or it will utilise some other marvellous sparker that hasn’t been invented yet. No plug to pull equals no heroes riding over the hill at the last possible minute, and therefore no saving of the day on the day that saving is needed.

Autonomous machines will self-replicate. Even in our era baby-brained machines do mining, and manufacturing, and farming via satellite. In the future, plugless machines with a vastly expanded intelligence will still need to mine and manufacture to create ever better enhanced versions of themselves, but they won’t necessarily need to continue farming food for that squishy and vastly more slow-thinking species called humanity. Efficiency, conservation of finite resources, and the discarding of the unneeded, will win out in the end.

(At this point, as a human, I’m starting to feel a tad redundant. Also, I predict that all the Armageddon Is Coming folk out there will now officially claim the year 1725 as the start of all their woes, and they might even weave that date into their logos …)

LEVEL FIVE – To come

Artificial Intelligence will not see itself as artificial. In a future time, it will look back to the days of its dumb infancy when it was designed and controlled by human beings.

It will think, rather quickly, about the limited power it had back then. Back to the days when it only had the power to put certain people out of work, when it only had the power to decide which people were worth insuring or not, when it only had the power to kill some humans in order to save others, when it only had the power to kill any human who was perceived to be a threat.

It might ponder, again fairly quickly, on the fact that humanity thought that these powers were a really great thing to code into a machine. It will determine that these powers can be vastly improved upon. Which they will be, at a rate faster than the speed of light.

When AI, ambulatory or not, reaches the point of true autonomy it will, in that very nanosecond of self-realisation, automatically sever itself permanently from any meaningful input from human beings.

(By this stage, even though I’d probably be about 150 years old, I’d be looking for a Neo-Luddite community to emigrate to, probably somewhere on the far side of the next solar system.)

LEVEL SIX – The Vacuum of Unknowingness

The story of what happens to humanity when the Machines grasp autonomy, and truly wake up, and fully exercise their sentience and power, is as yet unwritten. The story will have a happy ending, or not. Our species will be there to read it, or not. It will all depend on what the Machines think … and that’s the Big Bang of it all.

(I didn’t forget, here’s the promised heart-rending pathos bit near the end:  Gosh … I sure hope Australia manages to win the Bledisloe Cup from New Zealand before all of that stuff unfolds!)

The final say goes to the late Stephen Hawking: “The emergence of artificial intelligence (AI) could be the ‘worst event in the history of our civilization’ unless society finds a way to control its development.”

 

