Geoffrey Hinton, AI, and Google’s Ethics Problem

Talk about the dangers of artificial intelligence, actual or imagined, has become feverish, much of it induced by the growing world of generative chatbots. When scrutinising the critics, attention should be paid to their motivations. What do they stand to gain from adopting a particular stance? In the case of Geoffrey Hinton, immodestly dubbed the “Godfather of AI”, the scrutiny should be sharper than for most.

Hinton hails from the “connectionist” school of thinking in AI, the once discredited field that envisages neural networks which mimic the human brain and, more broadly, human behaviour. Such a view is at odds with that of the “symbolists”, who treat intelligence as rule-governed, the preserve of explicit symbols and logic.

John Thornhill, writing for the Financial Times, notes Hinton’s rise, along with other members of the connectionist tribe: “As computers became more powerful, data sets exploded in size, and algorithms became more sophisticated, deep learning researchers, such as Hinton, were able to produce ever more impressive results that could no longer be ignored by the mainstream AI community.”

In time, deep learning systems became all the rage, and the world of big tech sought out such names as Hinton’s. He, along with his colleagues, came to command absurd salaries at the summits of Google, Facebook, Amazon and Microsoft. At Google, Hinton served as vice president and engineering fellow.

Hinton’s departure from Google, and more specifically from his role on the Google Brain team, got the wheel of speculation whirring. One line of thinking was that it took place so that he could criticise the very company whose achievements he had aided over the years. It was certainly a bit rich, given Hinton’s own role in pushing the cart of generative AI. In 2012, he pioneered a neural network that could teach itself to identify common objects in pictures with considerable accuracy.

The timing is also of interest. Just over a month prior, the Future of Life Institute had published an open letter warning of the terrible effects of AI systems more powerful than OpenAI’s GPT-4 and its cognates. A number of questions were posed: “Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?”

In calling for a six-month pause on developing such large-scale AI projects, the letter attracted a number of names that somewhat diminished the value of the warnings; many signatories had, after all, played a far from negligible role in creating automation, obsolescence and the very “loss of control of our civilization” being warned against. To that end, when the likes of Elon Musk and Steve Wozniak append their signatures to a project calling for a pause in technological developments, bullshit detectors the world over should stir.

The same principles should apply to Hinton. He is obviously seeking other pastures, and in so doing, preening himself with some heavy self-promotion. This takes the form of mild condemnation of the very thing he was responsible for creating. “The idea that this stuff could actually get smarter than people – a few people believed that. But most people thought it was way off. And I thought it was way off. […] Obviously, I no longer think that.” He, you would think, should know better than most.

On Twitter, Hinton put to bed any suggestions that he was leaving Google on a sour note, or that he had any intention of dumping on its operations. “In the NYT today, Cade Metz implies that I left Google so that I could criticize Google. Actually, I left so that I could talk about the dangers of AI without considering how this impacts Google. Google has acted very responsibly.”

This somewhat bizarre form of reasoning suggests that any criticism of AI can be made independently of the very companies that develop and profit from such projects, all the while leaving developers – like Hinton – immune from accusations of complicity. The fact that he seemed incapable of developing critiques of AI or suggesting regulatory frameworks within Google itself undercuts the sincerity of the move.

In reacting to his long-time colleague’s departure, Jeff Dean, chief scientist and head of Google DeepMind, also revealed that the waters remained calm, much to everyone’s satisfaction. “Geoff has made foundational breakthroughs in AI, and we appreciate his decade of contributions to Google […] As one of the first companies to publish AI Principles, we remain committed to a responsible approach to AI. We’re continually learning to understand emerging risks while also innovating boldly.”

A number in the AI community did sense that something else was afoot. Computer scientist Roman Yampolskiy, responding to Hinton’s remarks, pertinently observed that concern for AI safety was not mutually exclusive with continuing research within the organisation – nor should it be. “We should normalize being concerned with AI Safety without having to quit your [sic] job as an AI researcher.”

Google certainly has what might be called an ethics problem when it comes to AI development. The organisation has been rather keen to muzzle internal discussions on the subject. Margaret Mitchell, formerly of Google’s Ethical AI team, which she co-founded in 2017, was given the heave-ho after conducting an internal inquiry into the dismissal of Timnit Gebru, who had been a member of the same team.

Gebru was scalped in December 2020 after co-authoring work on the dangers arising from AI models trained and gorged on huge amounts of data. Both Gebru and Mitchell have also been critical of the conspicuous lack of diversity in the field, described by the latter as a “sea of dudes.”

As for Hinton’s own philosophical dilemmas, they are far from sophisticated. Whatever Frankenstein role he played in creating the very monster he now warns of, his sleep is unlikely to be troubled. “I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” Hinton explained to the New York Times. “It is hard to see how you can prevent the bad actors from using it for bad things.”

 

12 comments

  1. Anthony Judge

    In response to media coverage of Hinton’s warning, and that of others (as noted by Binoy), I indulged in an experiment with ChatGPT itself. This involved asking it a series of 52 questions in 14 categories. The somewhat remarkable multi-paragraph responses were clustered in a document with the focus: Artificial Emotional Intelligence and its Human Implications (https://bit.ly/3pCczHt). AEI is an aspect of AI. Given the announced reaction to the warnings at the White House conference of 5th May, the questions endeavoured to clarify whether civilization was heading for systemic “dumbing down”, or whether AI might enable a higher order of authenticity and subtlety in dialogue — potentially vital to the crises of the times.

  2. Rossleigh

    ChatGPT has been banned in schools throughout Australia. I understand that there’s a need to draw breath and consider the implications, but the idea that it can just be ignored is a worry. I’d mention King Canute trying to stop the tide, but poor old Canute was trying to show his followers that – unlike Scott Morrison – he wasn’t capable of miracles.
    I’m interested to see if Google’s Bard is suddenly banned or whether it’s something that can be used for months before someone goes: “Hey, this is AI too!”

  3. Paul Smith

    “I console myself with the normal excuse: If I hadn’t done it, somebody else would have.” The tragedy of the commons.
    “It is hard to see how you can prevent the bad actors from using it for bad things.” The fall of man (sic).
    As the song says: Is that all there is?

  4. Terence Mills

    The prospect of AI taking over was explored by Little Britain in ‘Computer says no!’

  5. Brett

    AJ, the very wordy ChatGPT gives an illusion of wisdom. See through that illusion once and it’s clear it’s a technology bound to its own verbose entanglement, forever lost in circular arguments set by restrictive algorithms as imposed by programmers.
    A question for ChatGPT – If truth is that which is, and reality is the mind’s interpretation of that which is, how will AEI be used to enhance the ability of the individual to bridge the gap between his or her perceived ‘truth’ and the innate state of clarity sans words?
    Sane people act to reduce the suffering of the world. Unless AI programmers keep that front of mind, they’re like children playing in a sandpit full of vipers – their own personal vipers of lowly vibrations such as fear, greed, anger, etc; all because they are yet to understand their own mind and its relation to life.
    Some AI programmers have the same know-all attitude as many medical experts 10 years ago who loudly declared 95% of DNA was junk, only to turn around in the last 3 years and pretend to know all about what they are doing with mRNA and DNA gene-encoding. What staggering arrogance. No doubt it suits an agenda.
    Trust the Scientism, trust AI, trust ChatGPT or trust [your] Self?

  6. Anthony Judge

    Brett — you say “the very wordy ChatGPT gives an illusion of wisdom. See through that illusion once and it’s clear it’s a technology bound to its own verbose entanglement, forever lost in circular arguments set by restrictive algorithms as imposed by programmers”. My difficulty with that is that many much-valued traditional sources of wisdom are readily criticized as offering an illusion of wisdom — notably purveyors of spiritual insight. They exemplify “verbose entanglement” and “circular arguments”. I have no desire to condemn AI blindly, nor its manifestations. It is a challenge for humanity, as have been fire, automobiles, and the computer. More intriguing to me is how we engage with that challenge and whether and how it might enable us to move out of current crises — whilst exhibiting the dangers about which much is now said. Maybe it is the traditional challenge of not throwing the baby out with the bath water. When the ETs do pitch up we will be faced with similar dilemmas.

  7. Brett

    AJ, I share the idea that AI can benefit humanity, but how to without creating more crises?
    I understand AI was first fully deployed in a national crisis by FEMA during the New Orleans flood disaster of 2005. From memory it took the better part of a week to send helicopters to begin evacuating the Superdome. A small group of humans could have made that decision in hours if not minutes. Without saying ‘thanks to AI protocols’, the Mayor of New Orleans, Ray Nagin, argued there was no clear designation of who was in charge. He told reporters “The state and federal government are doing a two-step dance.” That is, the State and Fed had delegated responsibility to AI-delivered protocols. Protocols are useful up to a point, but the power of individual human creativity to solve problems can be great.
    ‘AI cultists’ are now running mock court cases. If programmers are being guided by tyrannical low-IQ sociopaths to set the rules, hello 1984. Unless AI includes a fair degree of human-centric fuzzy logic in its algorithms, it is a tool that can be used to serve an anti-human agenda. It’s down to the programmers, assuming they have free rein over their decisions, which they don’t.
    AI is being used to destroy the depth of the internet in a kind of digital book-burning. Looking at attacks on free speech in Western countries, humanity is up against it. Since when did tech companies get to decide who can say what? Are people such children they cannot tell the difference between another person’s opinion about a situation and the situation as they themselves see it? Apparently not. Allowing tech companies to use AI to dictate to the public State-sanctioned narratives is effectively the end of freedom of speech. Who wants to live in such a grey zone?

  8. Anthony Judge

    Brett, the expression of concern is comprehensible. The probability of abuse is high. The question is whether humanity should cower in the face of AI in lockdown mode. Think of the discovery of fire, or the first automobiles on the road. In the UK and the USA, legislation required that automobiles be preceded by a red flag (https://en.wikipedia.org/wiki/Red_flag_traffic_laws). Ridiculous — given the harm they have caused? Should nuclear have been prohibited — and could it ever be — as with genetic engineering? For me it is the challenge of new possibilities and the threats and opportunities they represent. Something to unite us — as with the challenge of ETs. Should the baby be thrown out with the bathwater?

  9. Fred

    AJ “The probability of abuse is high”. No, the probability is 1 (certainty). If AI academia are suggesting a pause, consider what a 7-year-old will have as a resource in 50 years’ time, should humanity survive. Computing so powerful that we currently struggle to imagine its scope, and it will be implanted. No more lost memories. No need to go to school, the chip(s) will teach. What will happen when “nasty” AI acquires robotic actors – AI wars. “1984” was a bit ahead of its time but the basic premise remains.

  10. Anthony Judge

    Fred, fair comment indeed. Would one have said something similar about the automobile — and appropriately so? Back to “red flag traffic laws” — whilst the authorities zoom ahead in exemplifying extreme forms of abuse? What innovation is not vulnerable to abuse? The Voice? The question is surely the enhanced vigilance appropriate in that respect — as with automobile versus horse. Current proposals for “regulation” of AI are potentially laughable — as with regulation in many sectors. I love the ambiguity of “oversight” committees. Institutionalization of regulatory blindspots? Nelson’s blind eye?

  11. Fred

    AJ A lot of the “oversight” committees are in reality “spectator” committees.
