On Wednesday, the United States House Committee on Science, Space, and Technology heard evidence regarding the societal and ethical implications of Artificial Intelligence. The committee has jurisdiction over non-defence federal scientific research and development in the US and heard from the following witnesses:

  • Meredith Whittaker, AI Now Institute, New York University
  • Jack Clark, OpenAI
  • Joy Buolamwini, Algorithmic Justice League
  • Dr. Georgia Tourassi, Oak Ridge National Laboratory

We’ve summarised some of the key themes that emerged from the lively and thought-provoking discussion.

When asked how AI differs from other transformative technologies, the panellists highlighted that, relative to other emerging technologies such as 5G, AI is being deployed at an incomparably faster speed and larger scale. They noted that AI is developed by a small group of individuals but subsequently deployed in ways that affect the lives of a huge proportion of the global population. All the while, the data and infrastructure required to deploy the technology at scale are predominantly held by corporate entities and, as such, have been optimised to maximise revenue and profit. The question that naturally follows is: does society have the appropriate checks in place to ensure that AI does not override the public good?

Dr. Tourassi added that AI is not simply about computers or technology; rather, it is fundamentally shaped by how humans apply it. Tourassi highlighted that the federally coordinated AI strategy in the health field has been beneficial in a sector where adequate IT infrastructure and privacy are paramount, but insisted that more work is required to define data ownership and to draw a clearer line between data used for research and data used for commercial purposes.

The participants highlighted several examples of AI applications that persistently amplify structural biases. Buolamwini, for instance, cited a report demonstrating that the AI systems in automated vehicles are less capable of detecting children than adults, leading her to ask which lives matter. Further, AI is being used to justify the actions of public and private deployers even in situations where there is little scientific evidence that the claimed correlations exist. Criticism was levelled at the industry and academic communities for neglecting to evaluate and document the limitations of the AI systems they develop and deploy, and for the fact that interrogating these systems for research purposes remains too difficult at present.

It was acknowledged that there are technical challenges related to AI that can be addressed, such as the robustness and resilience of an algorithmic system. Fundamentally, however, AI is automating tasks that require value judgements. This poses several questions, including: for what and for whom are the tasks being automated? And, when AI systems are deemed effective, effective at what and for whom?

The witnesses presented several suggestions for US lawmakers to increase transparency and accountability and to mitigate the ethical and societal risks that AI systems may pose, including:

  • Comprehensive use of AI impact assessments and data labelling to improve transparency
  • Public disclosure of AI use to consumers
  • Waiving trade secrecy rules where they hinder public oversight and accountability
  • Adequate protection for AI whistle-blowers
  • Government investment in measurement, assessment and benchmarking to inform when AI is benefitting society and when it is failing groups or individuals
  • Federally convened multi-disciplinary conversations to address domain-specific uses of AI
  • Federally-funded AI research for projects that are in the public interest

You can watch the session in full here, and follow the links to read the written testimonies of Meredith Whittaker, Jack Clark, Joy Buolamwini, and Georgia Tourassi.