[CODATA-international] Zero Trust AI Governance

Suchith Anand Suchith.Anand at nottingham.ac.uk
Mon Aug 21 03:53:53 EDT 2023


Dear colleagues,



The leading civil society organizations Accountable Tech, AI Now Institute, and EPIC have jointly released a new "Zero Trust AI Governance" framework, which offers policymakers a robust and enforceable roadmap for addressing the urgent societal risks posed by AI technologies.


Rapid advances in AI, the frenzied deployment of new systems, and the surrounding hype cycle have generated a swell of excitement about AI’s potential to transform society for the better. But we are not on course to realize those rosy visions. AI’s trajectory is being dictated by a toxic arms race amongst a handful of unaccountable Big Tech companies – surveillance giants who serve as the modern gatekeepers of information, communications, and commerce.


The societal costs of this corporate battle for AI supremacy are already stacking up as companies rush unsafe systems to market – such as chatbots prone to confidently spewing falsehoods – and recklessly integrate them into flagship products and services.


Near-term harms include turbocharging election manipulation and scams, exacerbating bias and discrimination, and eroding privacy and autonomy, among many others. Additional systemic threats loom in the medium and longer terms, such as steep environmental costs, large-scale workforce disruptions, and further consolidation of power by Big Tech across the digital economy.


Industry leaders have gone even further, warning of the threat of extinction as they publicly echo calls for much-needed regulation – all while privately lobbying against meaningful accountability measures and continuing to release increasingly powerful new AI systems. Given the monumental stakes, blind trust in their benevolence is not an option.


Indeed, a closer examination of the regulatory approaches they’ve embraced – namely ones that forestall action with lengthy processes, hinge on overly complex and hard-to-enforce regimes, and foist the burden of accountability onto those who have already suffered harm – informed the three overarching principles of this Zero Trust AI Governance framework:

  1.  Time is of the essence – start by vigorously enforcing existing laws.
  2.  Bold, easily administrable, bright-line rules are necessary.
  3.  At each phase of the AI system lifecycle, the burden should be on companies to prove their systems are not harmful.


Absent swift federal action to alter the current dynamics – by vigorously enforcing laws on the books, finally passing strong federal privacy legislation and antitrust reforms, and enacting robust new AI accountability measures – the scope and severity of harms will only intensify.

If we want the future of AI to protect civil rights, advance democratic ideals, and improve people’s lives, we must fundamentally change the incentive structure.


More details are available at https://ainowinstitute.org/publication/zero-trust-ai-governance


Best wishes


Suchith



Dr Suchith Anand

Senior Adviser to Governments and International Organisations | Scientist | AI Ethics | AI Governance | Policy | Consultant in Data and AI Ethics | Global Citizen | SDG Volunteer and Advocate

https://council.science/profile/suchith-anand/

https://www.rd-alliance.org/users/suchith-anand

https://ethicaldatainitiative.org/
