AI might be booming, but a new brief from the global Technology Policy Council of the Association for Computing Machinery (ACM), which publishes tomorrow, notes that the ubiquity of algorithmic systems “creates serious risks that are not being adequately addressed.”

According to the ACM brief, which the organization says is the first in a series on systems and trust, perfectly safe algorithmic systems are not possible. However, achievable steps can be taken to make them safer, and those steps should be a high research and policy priority for governments and all stakeholders.

The brief’s key conclusions:

  • To promote safer algorithmic systems, research is needed on human-centered and technical software development methods, improved testing, audit trails, and monitoring mechanisms, as well as training and governance.
  • Building organizational safety cultures requires management leadership, focus in hiring and training, adoption of safety-related practices, and continuous attention.
  • Internal and independent human-centered oversight mechanisms, within both government and organizations, are necessary to promote safer algorithmic systems.

AI systems need safeguards and rigorous review

Computer scientist Ben Shneiderman, professor emeritus at the University of Maryland and author of Human-Centered AI, was the lead author of the brief, which is the latest in a series of short technical bulletins.