How Safe Are Your Algorithms?

John W Lewis
Chat Date: Thu, Jun 29, 2017

In a Turing Lecture earlier this year, Ben Shneiderman described the problems caused by unsafe algorithms and proposed a solution. Let's discuss this issue and whether his solution makes sense in the context of how safety issues are handled elsewhere. The title of the lecture is:

"Algorithmic Accountability: Designing for safety through human-centered independent oversight",

and it is described here by The Alan Turing Institute.

In the video, the main part of the lecture runs for about 40 minutes, starting at https://www.youtube.com/watch?v=yxTXgzsAMIE&t=28m25s

The proposal is to establish organizations that provide safety oversight of processes which use algorithms. In the US, this would take the form of a National Algorithm Safety Board, analogous to the National Transportation Safety Board for transport and to the safety oversight bodies in other fields.

A clear distinction is made between safety boards and regulators, which perform different functions.

Questions

Let's discuss this topic during #innochat on June 29, 2017, starting at 12 noon Eastern time, based on the following questions:

  1. How do systems which use algorithms differ from other systems?
  2. What are the characteristics of algorithms that require special attention?
  3. How do algorithms fail? That is, when a failure occurs, what happens?
  4. How do we handle safety in other disciplines and where does algorithmic safety fit in that regime?
  5. Is a safety board the right approach?

 
