
Rishi Sunak’s AI Pitch: The Bletchley Declaration

British Prime Minister Rishi Sunak looks as much a deep fake projection as a thin, superficial representation of reality. His robotic, risible awkwardness makes a previous occupant of his office, Theresa May, look soppily human in comparison. But at the AI Safety Summit at Bletchley Park, the nervous system of code breaking during the Second World War dominated by such troubled geniuses as Alan Turing, delegates had gathered to chat about the implications of Artificial Intelligence.

The guest list comprised a hot shot roll of Big Tech and political panjandrums, part of an attempt by the UK to, as TechCrunch put it, “stake out a territory for itself on the AI map – both as a place to build AI businesses, but also as an authority in the overall field.” They included Google, Meta, Microsoft and Salesforce, but excluded Apple and Amazon. OpenAI and the perennially unpredictable Elon Musk, with his xAI, were present.

The guest list in terms of country representatives was also curious: no Nordic presence; no Russia (but Ukraine – naturally). Brazil, holding up the Latin American front; a few others doing the same for the Global South. US President Joe Biden was not present, but had sent his Vice President, Kamala Harris, as emissary. The administration had, only a few days prior, issued the first Executive Order on AI, boastfully claiming to establish “new standards for AI safety and security” while protecting privacy and advancing equity and civil rights, all alongside promoting consumer and worker welfare, innovation and competition. Doubters will be busy.

China was invited to the event with the reluctance one affords an influential but undesirable guest. Accordingly, its delegates were given what could only be regarded as a confined berth. In that sense, the summit, as with virtually all tribal gatherings, had to find some menacing figure in the grand narrative of human striving. Humankind is important, but so are select, targeted prejudices. As UK Deputy Prime Minister Oliver Dowden stated with strained hospitality, “There are some sessions where we have like-minded countries working together, so it might not be appropriate for China to join.”

Sunak left it to the Minister for Technology, Michelle Donelan, to release the Bletchley Declaration, a document which claims to scrape and pull together some common ground on how the risks of AI are to be dealt with. Further meetings are also planned as part of an effort to make this gig regular: South Korea will host in six months; France six months after that. But the British PM was adamant that hammering out a regulatory framework at this point was premature: “Before you start mandating things and legislating for things… you need to know exactly what you’re legislating for.” Musk must have been overjoyed.

The declaration opens with the view that AI “presents enormous global opportunities: it has the potential to transform and enhance human wellbeing, and prosperity.” With that rose-tinted view firmly in place, the statement goes on to state the goal: “To realise this, we affirm that, for the good of all, AI should be designed, developed, deployed, and used, in a manner that is safe, in such a way as to be human-centric, trustworthy and responsible.”

Concerns are floated, including the potential abuse arising from the platforms centred on language systems being developed by Google, Meta and OpenAI. “Particular safety risks arise at the ‘frontier’ of AI, understood as being those highly capable general-purpose AI models, including foundation models, that could perform a wide variety of tasks – as well as relevant specific narrow AI that could exhibit capabilities that cause harm – which match or exceed the capabilities present in today’s most advanced models.”

Recognition also had to be given to “the potential impact of AI systems in existing fora and other relevant initiatives, and the recognition that the protection of human rights, transparency and explainability, fairness, accountability, regulation, safety, appropriate human oversight, ethics, bias mitigation, privacy and data protection needs to be addressed.”

For the sake of form, the statement is partly streaked by concern for the “potential intentional misuse or unintended issues of control relating to alignment with human intent.” There was also “potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models.”

The declaration goes on to chirp about the virtues of civil society, though its creators and participants did little to assure civil society that its role mattered. In a letter to Sunak signed by over 100 UK and international organisations, human rights groups, trade union confederations, civil society organisations, and experts, the signatories protested that “the Summit is a closed door event, overly focused on speculation about the remote ‘existential risks’ of ‘frontier’ AI systems – systems built by the very same corporations who now seek to change the rules.”

It was revealing, given the theme of the conference, that “the communities and workers most affected by AI have been marginalised by the Summit.” Talking about AI in futuristic terms also misrepresented the pressing, current realities of technological threat. “For many millions of people in the UK and across the world, the risks and harms of AI are not distant – they are felt in the here and now.”

Individuals could have their jobs terminated by algorithm. Loan applicants could be disqualified on the basis of postcode or identity. Authoritarian regimes were using biometric surveillance while governments resorted to “discredited predictive policing.” And the big tech sector had “smothered” innovation, squeezing out small businesses and artists.

From within the summit itself, limiting China’s contribution may have revealing consequences. A number of Chinese academics attending the summit had signed a statement showing even greater concern about the “existential risk” posed by AI than either the Bletchley statement or President Biden’s executive order. According to the Financial Times, the group, which includes such figures as the computer scientist Andrew Yao, is calling for the establishment of “an international regulatory body, the mandatory registration and auditing of advanced AI systems, the inclusion of instant ‘shutdown’ procedures and for developers to spend 30 per cent of their research budget on AI safety.”

Humankind has shown itself to be able, on rare occasions, to band together in creating international frameworks to combat a threat. Unfortunately, such structures – the United Nations being one notable example – can prove brittle and subject to manipulation. How the approach to AI maintains an “ethic of use” alongside the political and economic prerogatives of governments and Big Tech is a question that will continue to trouble critics well-nourished by scepticism. Rules will no doubt be drafted, but by whom?


7 comments

  1. frances

    Another brilliant piece by Mr Kampmark demonstrating the absurd ineptitude and greed of the masters of the universe when humanity is faced with an existential threat.

    The UK Guardian’s John Crace’s gobsmacking glossary of the ruthless patois adopted by #10 during the glory days of the Johnson Govt during covid hardly inspires confidence in a better future under the sycophantic Rishi Sunak.

    Depressing to think that all those demos and marches and consciousness-raising and political activism and environmentalism and momentary victories by progressives over the decades merely confirm a kind of Third Law of push and pushback between humanity’s primitive urges and capacity for self-restraint.

    Seems we’re flailing like insects in a zero sum game of our own making.

  2. New England Cocky

    The more things stay the same ….. I am reminded that these same concerns were raised about 50 years ago when the NSW DoE was ”looking at” the use of computers in education, especially schools.
    .
    Some DoE genius was sold the Wang computer having an alphabetic rather than the usual QWERTY keyboard because the DoE ”officer” charged with researching this matter lacked any personal keyboard skills and even less interest …. he was a mere 63 days away from retirement.
    .
    The group of forward thinking teachers invited to this workshop included about 50% persons having some level of programming skills, certainly a number had much more experience than the soon to retire officer.
    .
    The official ”feeling” was that ”computers were unlikely to make any great difference to the way education or schools or curricula were run, and this attitude continued for over a decade as the under-skilled DoE desk jockeys in Head Office protected their personal bailiwicks.
    .
    Given that only the names have changed to protect the guilty, and the corporate ethos is to avoid making changes at almost any cost, it appears likely that decisions made at this time about AI will exclude the education sector because the desk jockeys lack the necessary imagination to see the future uses just waiting to be developed.
    .
    Why, we may even see classroom teachers replaced by a computer robot playing computer games that teach necessary subjects ….. or worse, the elimination of desk jockeys from Head Office by computers doing all the menial paper shuffling jobs.

  3. New England Cocky

    @K: Credit where credit is due.

    I am reminded of the development by Australian scientists of the discovery and development of commercial penicillin production that American Big Pharma then manufactured during WWII to save many Allied personnel from dying of infection of their wounds.

  4. corvusboreus

    K,
    Thank you for providing that link.
    Completely new info to me.

  5. GL

    Rishi Sunak: The first attempt at an AI wooden boy that Geppetto and the fairy don’t talk about.

  6. Clakka

    Dr Binoy, thanks for the article,

    Did you use AI for any of it, I note esp the potential typo “How the approach to AI maintains an “ethnic of use” ….” in the last para. Just kidding really.

    For me, I like the www, just the way it is (and social media – well, time will tell). But, as for AI becoming ubiquitous and perchance a replacement for the free access to detailed and disparate info available on the www – absolutely no way!

    I see AI as an ‘orwellian’ attempt at smoothing alternative views and condensing them into some form of puff or fairy floss governed by algorithms of the putative and ever-aspiring masters of the digital information universe, with its ever growing commercial & psychological importance. And of course, the incumbent political narcissists and despots will seek to have their finger in that pie.

    It is risible, yet no small irony, that very recently in Oz, academics reporting to govt have been caught out seriously misrepresenting matters because of reliance upon AI:

    https://www.theguardian.com/business/2023/nov/02/australian-academics-apologise-for-false-ai-generated-allegations-against-big-four-consultancy-firms

    PS: thanks K for the ‘Enigma’ link. Just shows how we can, in complacency, be blindsided by BS.
