Chapter 6: A model of fair and explainable artificial intelligence

Artificial intelligence (AI) is making more and more algorithmic decisions for humans. However, the "intelligence" in AI relies heavily on learning technologies, which suffer from two major flaws that lead to legal and technical challenges: potentially discriminatory or biased decisions, and an inability to explain why and how a machine makes such decisions. Building a fair and explainable AI model is therefore important and urgent. This article presents a novel theory-based, individual-level dynamic learning method that learns from the data of an individual subject without employing others' information, and identifies the causal mechanism in the unobserved data-generating process that each subject exhibits. Data selection bias is thus avoided, and a fair and interpretable decision is achieved. We empirically test our method using a real-world dataset on risk assessment for lending decisions. Our results show that the proposed method outperforms conventional learning methods in terms of fairness in treating data subjects, decision accuracy, and interpretability.
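To illustrate the distinction the abstract draws, the following is a minimal sketch (not the chapter's actual algorithm) contrasting conventional pooled learning, where one model is fit on all subjects' data, with an individual-level approach that fits a separate model on each subject's own history only. The synthetic repayment histories, feature names, and the choice of logistic regression are illustrative assumptions.

```python
# Sketch only: pooled vs. individual-level learning on hypothetical data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def synthetic_history(n_periods=40):
    """Fake per-subject repayment history: features X, default indicator y."""
    X = rng.normal(size=(n_periods, 3))            # e.g. income, utilization, tenure
    w = rng.normal(size=3)                         # subject-specific mechanism
    y = (X @ w + rng.normal(scale=0.5, size=n_periods) > 0).astype(int)
    return X, y

subjects = [synthetic_history() for _ in range(5)]

# Conventional (pooled) learning: one model fit on everyone's data, so each
# decision depends on how other subjects happened to be sampled.
X_all = np.vstack([X for X, _ in subjects])
y_all = np.concatenate([y for _, y in subjects])
pooled = LogisticRegression().fit(X_all, y_all)

# Individual-level learning: a separate model per subject, trained only on
# that subject's own history, so other subjects' data cannot bias the decision.
individual = [LogisticRegression().fit(X, y) for X, y in subjects]

X0, y0 = subjects[0]
print("pooled accuracy on subject 0:    ", pooled.score(X0, y0))
print("individual accuracy on subject 0:", individual[0].score(X0, y0))
```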
