While the significant and positive impact of Artificial Intelligence (AI) on business and society at large is well known, less attention is paid to the potential for unethical applications or outcomes of the new technology. A framework developed by Oxford University researchers offers an action plan for ensuring the ethical application of AI.
While the technological capabilities and impact of artificial intelligence (AI) have brought significant change to multiple facets of business and even society, the core of AI is still machines, not humans. And while these machines can learn, they cannot discern right from wrong—unless we deliberately step in to add an ethical dimension to AI. Where to start?
In a report co-sponsored by the Oxford Future of Marketing Initiative and the International Chamber of Commerce, a team of researchers from Oxford University’s Saïd Business School review and analyse the academic research in AI ethics, as well as ethical AI-related business statements and governmental and intergovernmental documents, to develop a framework for maintaining ethical boundaries in the use of artificial intelligence.
The framework’s first step is to develop a hierarchical set of principles—hierarchical in the sense that major, overriding principles are broken down into smaller principles.
The two fundamental principles of ethical AI are responsibility, which refers to the processes supported or driven by AI, and accountability, which refers to the outcomes of AI-related activities and operations.
Ensuring accountability begins with proactive leadership, and also includes reporting, contesting, correcting, and liability.
Responsibility is built on a more complex set of components, starting with three key principles: human-centric, fair, and harmless. Human-centric is concerned with the rights and self-determination of individuals, as well as the domains that benefit humans, such as sustainability. Thus, human-centric processes are processes that are transparent, intelligible and sustainable, as well as beneficial. The principle of fairness is achieved through processes that are just, inclusive, and non-discriminatory. Finally, harmless systems are safe, robust, and private.
Using the parameters just described as a guide, the next step in ensuring the ethics of AI applications and use in an organization is to identify where the risks of unethical AI can occur. The first risk ‘bucket’ is data. For example, the selection of data may be discriminatory or invade the privacy of individuals. The second risk bucket involves algorithms—the set of instructions at the heart of AI that might be influenced by the biases of those developing the algorithms. The final risk bucket is business use, which covers business goals—i.e., AI is used to achieve unethical business goals—and deployment—i.e., users can subvert the original ethical intention of AI towards unethical activities, including activities with adverse societal consequences.
With principles and risks identified, an organization can now take practical steps to ensure the ethical application of AI. The first step is a statement of intent, similar to a mission or vision statement, that proclaims the organization’s commitment to ethical AI values, policies and practices. The second step is to implement an ethical AI plan for the organization—a plan that addresses the identified risks in the organization’s data, algorithms and business use of AI.
As new risks are discovered or emerge, the original application plans can be updated.
Borrowing an analogy from its developers, this framework and action plan for applying AI ethically offers both a ‘flight plan’—consisting of the ethical AI statement of intent—and a ‘flight checklist’ for each application of AI in an organization. The checklist allows the organization to monitor and manage the sources of potential ethical issues in its data, algorithms and AI business use, ensuring that AI in the organization leads to outcomes that are human-centric, fair and harmless.
It’s important to note the dynamic nature of the framework. Vigilant monitoring for potential ethical issues in its applications of AI not only allows the organization to identify problem areas, but also to then put safeguards and preventive measures in place, thus strengthening its commitment to responsible and accountable processes.
Ethics for AI in Business. Felipe Thomaz, Natalia Efremova, Francesca Mazzi, Greg Clark, Ewan MacDonald, Rhonda Hadi, Jason Bell & Andrew Stephen. Oxford Future of Marketing Initiative report (July 2021).