Artificial intelligence developers, the professional associations that represent them, and academic institutions all participate in the development and implementation of norms for artificial intelligence governance. Developers of AI applications are uniquely positioned to make technical decisions that have broader impacts. Professional associations in which AI developers, academics, and companies participate have recognized the need for AI governance and have responded by developing ethical guidelines for AI and by contributing to other forms of governance at the international level. The university, part of whose mission is to engage in basic research, has likewise responded by establishing centers for the study of AI governance and is itself responsible for AI norms that have had some influence. Individual academics engage in international collaborations and provide their expertise to international governing bodies. At the same time, these actors often stand in close relationships with private firms, whether as employees, consultants, or recipients of funding. Finally, the norm of openness and the practice of developing technical tools to detect and mitigate harms may not stem solely from these actors, but they have been championed by them. Openness appears well established through cross-border research collaborations, even in the face of national security concerns, and through the adoption of open-source practices at the international level.
