

Anxiety as Socialism: AI Moratorium Fantasies

Rumours and streaks of hysteria are running rife about what such artificial intelligence (AI) systems as ChatGPT are meant to do. Connecticut Senator Chris Murphy recently showed himself both terrified and ill-informed about the chatbot created by OpenAI. “ChatGPT taught itself to do advanced chemistry. It wasn’t built into the model. Nobody programmed it to learn complicated chemistry. It decided to teach itself, then made its knowledge available to anyone who asked. Something is coming. We aren’t ready.”

Melanie Mitchell, an academic who knows a thing or two about the field, was bemused and tweeted as much. “Senator, I’m an AI researcher. Your description of ChatGPT is dangerously misinformed. Every sentence is incorrect. I hope you will learn more about how this system actually works, how it was trained, and what its limitations are.”

Murphy retorted indignantly that he had not meant what he said. “Of course I know that AI doesn’t ‘learn’ or ‘teach itself’ like a human. I’m using shorthand.” Those criticisms, he argued, were intended to bully “policymakers away from regulating new technology by ridiculing us when we don’t use the terms the industry uses.”

Like birds of a feather, Murphy’s intervention came along with the Future of Life Institute’s own contribution in the form of an open letter (the Letter). The document makes a number of assertions expected from an institute that has warned about the risks of supremely intelligent AI systems. Literally thousands digitally flocked to lend their names to it, including tech luminaries such as Elon Musk (a warning there), and Apple co-founder Steve Wozniak. (Currently, the number of signatures lies at 27,567.)

The Letter pleads that a six-month moratorium is necessary for humanity to take stock of the implications of AI. “Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders.” Emphatically, it continues: “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”

The Letter is unimpressive: clumsy, and transparent in its effort to manufacture anxiety. While there is much to be said for considered debate on the way AI is developing, one must ask where this plea is coming from. When billionaires demand a halt to technological practice, scepticism should start tickling the conscience. Suddenly, such voices demand transparency, accountability and openness, the very things they have shunned in their money-making endeavours. And who are the unelected tech leaders in any case?

As for the level of anxiety, the powerful and wealthy will always have bundles of it. If there is one commodity they truly want to share with the rest of us – call it anxiety as socialism – it’s their own fears writ large and disseminated as our fears. AI is that perfect conduit, a case of both promise and terror, therefore needing strict control. “The only things that can oppress US billionaires,” muses the Indian journalist and writer Manu Joseph, “are disease, insurrection, aliens and paranormal machines, the reason they tend to develop exaggeration [sic] notions of their dangers.”

For Mitchell, the authors and backers had embraced an all too gloomy view of humanity’s predicament in the face of AI. “Humans,” she wrote earlier this month, “are continually at risk of over-anthropomorphizing and over-trusting these systems, attributing agency to them when none is there.”

The useful premise for the unnerved fearmongers yields two corollaries: the attempt to halt the evolution of such systems in the face of innovation; and the selling factor. “Public fear of AI is actually useful for the tech companies selling it, since the flip-side of the fear is the belief that these systems are truly powerful and big companies would be foolish not to adopt them.”

Moratoria in the field of technology tend to be doomed ventures. The human desire to invent even the most cataclysmically foolish of devices is the stuff of Promethean legend. Consider, for instance, the debate on whether the US should develop a weapon even more destructive than the atomic bomb. The fear, then, was that the godless Soviets might acquire a superbomb, a muscular monster based on fusion, rather than fission.

In the seminal document received by US President Harry Truman on April 14, 1950, fears of such a discovery are rife. Written by Paul Nitze of the US State Department’s Policy Planning Staff, it warned “that the probable fission bomb capability and the possible thermonuclear bomb capability of the Soviet Union have greatly intensified the Soviet threat to the security of the United States.” The result of such a fear became the hydrogen bomb.

The more level-headed pragmatists in the field acknowledge, as do the listed authors of Stochastic Parrots (they include Margaret Mitchell) in a statement published on the website of the DAIR Institute, that there are “real and present dangers” associated with AI, but these arise from “the acts of people and corporations deploying automated systems. Regulatory efforts should focus on transparency, accountability and preventing exploitative labor practices.”

Perhaps, suggests Mitchell, we should aim for something akin to a “Manhattan Project of intense research” that would cover “AI’s abilities, limitations, trustworthiness, and interpretability, where the investigation and results are open to anyone.” A far from insensible suggestion, bar the fact that the original Manhattan Project, dedicated to creating the first atomic bomb during the Second World War, was itself a competition to ensure that Nazi Germany did not get there first.


Comments
  1. andyfiftysix

“Should we risk loss of control of our civilization.” Now there is a big assumption in all this. That we have any control now. Because we have the power to decide now on what our priorities are. Wow, imagine if a machine will teach us to have manners or singlemindedly put us in our place. HAHAHAHA, that’s why we have the RBA, so the government doesn’t have to make the decision. Wow, are we not full of our own shit.

  2. Clakka

    Those that dwell in fear will find something fearful in everything. And one can be certain that there will always be those that seek to exploit fear for every purpose imaginable.

  3. Steve Davis

    Andy said “Now there is a big assumption in all this. That we have any control now.”

Excellent point. There are forces working right now to deprive us of control. AI is the least of our worries.

    Clakka said “Those that dwell in fear will find something fearful in everything.” And the economics of selfishness (liberalism) that has been foisted upon us since the 1980s has ensured that the numbers of those who dwell in fear, grows and grows.

    Once community bonds are broken, when support networks are weakened or eliminated, these are replaced with a narcissistic individualism that breeds distrust and fear. When we are our only or main support mechanism, we live in fear of missing out.

    To compound the problem, an industry exists that is so huge that its impact is possibly beyond calculation, an industry the sole purpose of which is to engender this narcissistic individualism, this fear of missing out, and convince us that we can overcome our fears through non-stop consumerism.

    That industry is, of course, advertising.

    We are already being controlled.

  4. Clakka

Yes Steve Davis, agree. And also the division of politics and militarism (and their religious enablers) that would have us provide murderers and cannon fodder.

  5. Steve Davis

    Absolutely Clakka, absolutely.
