
War Mongering for Artificial Intelligence

The ghost of Edward Teller must have been doing the rounds between members of the National Security Commission on Artificial Intelligence. The father of the hydrogen bomb was never one too bothered by the ethical niggles that came with inventing murderous technology. It was not, for instance, “the scientist’s job to determine whether a hydrogen bomb should be constructed, whether it should be used, or how it should be used.” Responsibility, however exercised, rested with the American people and their elected officials.

The application of AI in military systems has plagued the ethicist but excited certain leaders and inventors. Russian President Vladimir Putin has grandiloquently asserted that “it would be impossible to secure the future of our civilization” without a mastery of artificial intelligence, genetics, unmanned weapons systems and hypersonic weapons.

Campaigners against the use of autonomous weapons systems in war have been growing in number. The UN Secretary-General António Guterres is one of them. “Autonomous machines with the power and discretion to select targets and take lives without human involvement,” he wrote on Twitter in March 2019, “are politically unacceptable, morally repugnant and should be prohibited by international law.” The International Committee for Robot Arms Control, the Campaign to Stop Killer Robots and Human Rights Watch are also dedicated to banning lethal autonomous weapons systems. Weapons analysts such as Zachary Kallenborn see that absolute position as untenable, preferring a more modest ban on “the highest-risk weapons: drone swarms and autonomous chemical, biological, radiological, and nuclear weapons.”

The critics of such weapons systems were nowhere to be seen in the Commission’s draft report for Congress. The document has more than a touch of the mad scientist in the bloody service of a master. This stood to reason, given that its chairman was Eric Schmidt, technical advisor to Alphabet Inc., parent company of Google, of which he was formerly CEO. With Schmidt holding the reins, we would be guaranteed a show shorn of moral restraint. As the draft report puts it: “The AI promise – that a machine can perceive, decide, and act more quickly, in a more complex environment, with more accuracy than a human – represents a competitive advantage in any field. It will be employed for military ends, by governments and non-state groups.”

In his testimony before the Senate Armed Services Committee on February 23, Schmidt was all about “fundamentals” in keeping the US ascendant. This involved preserving national competitiveness and shaping the military with those fundamentals in mind. But to do so required keeping the eyes of the security establishment wide open for any dangerous competitor. (Schmidt understands Congress well enough to know that spikes in funding and outlays tend to be attached to the promotion of threats.) He sees “the threat of Chinese leadership in key technology areas” as “a national crisis.” In terms of AI, “only the United States and China” had the necessary “resources, commercial might, talent pool, and innovation ecosystem to lead the world.” Within the next decade, Beijing could even “surpass the United States as the world’s AI superpower.”

The testimony is generously spiked with the China threat thesis. “Never before in my lifetime,” he claimed, “have I been more worried that we will soon be displaced by a rival or more aware of what second place means for our economy, our security, and the future of our nation.” He feared that such worries were not being shared by officials, with the DoD treating “software as a low priority.” Here, he could give advice on lessons learned in the spawning enterprises of Silicon Valley, where the principled live short lives. Those dedicated to defence could “form smart teams, drive hard deliverables, and move quickly.” Missiles, he argued, should be built “the way we now build cars: use a design studio to develop and simulate in software.”

This all meant necessarily praising a less repressible form of AI to the heavens, notably in its military applications. Two days of public discussion saw the panel’s vice chairman Robert Work extol the virtues of AI in battle. “It is a moral imperative to at least pursue this hypothesis,” he claimed, asserting that “autonomous weapons will not be indiscriminate unless we design them that way.” The devil is in the human, as it has always been.

In a manner reminiscent of the debates about sharing atomic technology in the aftermath of the Second World War, the Commission urges that the US “pursue a comprehensive strategy in close coordination with our allies and partners for artificial intelligence (AI) innovation and adoption that promotes values critical to free and open societies.” A proposed Emerging Technology Coalition of like-minded powers and partners would focus on the role of “emerging technologies according to democratic norms and values” and “coordinate policies to counter the malign use of these technologies by authoritarian regimes.” Fast forgotten is the fact that distinctions such as authoritarianism and democracy have little meaning at the end of a weapon.

Internal changes, bound to ruffle a few feathers, are also suggested. The US State Department comes in for special mention as needing reform. “There is currently no clear lead for emerging technology policy or diplomacy within the State Department, which hinders the Department’s ability to make strategic technology decisions.” Allies and partners were confused, when approaching the State Department, as to “which senior official would be their primary point of contact” for a range of topics, be they AI, quantum computing, 5G, biotechnology or other emerging technologies.

Overall, the US government comes in for a battering, reproached for operating “at human speed not machine speed.” It was lagging relative to commercial development of AI. It suffered from “technical deficits that range from digital workforce shortages to inadequate acquisition policies, insufficient network architecture, and weak data practices.”

The official Pentagon policy, as it stands, is that autonomous and semi-autonomous weapons systems should be “designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.” In February 2020, the Department of Defence formally adopted a set of ethical principles for the military use of AI, making the Joint Artificial Intelligence Center the focal point for their implementation. These include the provision that “DoD personnel will exercise appropriate levels of judgment and care, while remaining responsible for the development, deployment, and use of AI capabilities.” The “traceable” principle is also shot through with the principle of human control, with personnel needing to “possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities.”

The National Commission pays lip service to such protocols, acknowledging that operators, organisations and “the American people” would not support AI machines not “designed with predictability” and “clear principles” in mind. But the note of warning in not being too morally shackled becomes a screech. Risk was “inescapable” and not using AI “to solve real national security challenges risks putting the United States at a disadvantage.” Especially when it comes to China.

 


5 comments

  1. Matters Not

    Re:

    Responsibility, however exercised, rested with the American people and their elected officials.

    Really? Seems to me the notion of responsibility resting with the people lies at the heart of what’s wrong with our (supposed) democracy. In Australia (and elsewhere), crucial, life-changing decisions, such as declarations of war, broadly defined, are rarely (if ever) referred to the citizens. Worse, in most cases they are not even referred to their elected representatives.

    And it’s not just aggression abroad. There are so many other areas of potential (and real) policy initiatives that could’ve been referred to the electorate as a whole, with far different outcomes. Take the submarines, the planes, the armoured vehicles, etc. How many of those initiatives would have survived a popular vote? What about action on climate change? etc.

    While we have the technology to facilitate participatory democracy, it simply doesn’t happen. Guess the electorate isn’t up to the important decisions so we will just live a lie.

  2. Williambtm

    To simplify the article content prattled about above: the USA will continue its slaughter of people across the world. Why? Perhaps referring to all the dead as merely collateral casualties obscures what is better recognized as outright slaughter.

    The USA will continue to kill without care or compunction.
    Realistically, the USA Administration and its Pentagon Generals should be recognized as guilty of war crimes committed on the many occasions, in the many countries, they have invaded to establish democracy. Which is plain bullschitt.

  3. Brozza

    I guess the world’s #1 murderous terrorist state will hardly baulk at the opportunity to rapidly add to the 12M it’s already killed since WW2.

  4. Phil Pryor

    Teller, you say, was not too demanding on the moral restraints implied in his position, but, having leapfrogged the Oppenheimer era and team, we might reflect on Oppy’s filling out a security form.., “Do you drink alcohol.., four boxes: never; occasionally; regularly; to excess”. Oppy ticked two, the second and fourth.., showing inner churns and doubts. To leave that work to people like Harry Truman to use or not use, Harry the bankrupt, barely literate haberdashery manager and machine politician of the most superficial type.., well, it shows one of thousands of possible examples of bizarre futility in our commenting. The comments here rightly warn us of the worn and empty partial processes, easily avoided by schemers, theoretically available to us but actually absent. We need fast, electronic, citizen voting on well argued positions, and will likely never get this. Careerist, bribegutzing fistbonkers in office do not like our intrusions.

  5. Denis Bright in Brisbane

    Raises vital issues for consideration by Australians, as part of the US Global Alliance, with its largely bipartisan support base that would welcome new investment in lethal weapons here as a post-COVID economic recovery strategy. Thanks for your research, Dr. Binoy Kampmark.
