Intelligence Applies Ethics to Artificial Intelligence
Transparency and integrity are two key principles headlining the list of artificial intelligence (AI) ethics principles released by the Office of the Director of National Intelligence (ODNI). The list, released in conjunction with an AI ethics framework for the community, represents the ODNI's first effort to provide guidance on the development and use of AI in the intelligence community.
The Principles of Artificial Intelligence Ethics for the Intelligence Community is a brief document listing six core principles. In addition to respecting the law and acting with integrity while being transparent and accountable, AI use is to be objective and equitable; secure and resilient; informed by science and technology; and human-centered in its development and use. These principles will guide AI use and development activities with the private sector, the public, the legal system and the intelligence community at large, according to the publication.
These principles buttress the Artificial Intelligence Ethics Framework for the Intelligence Community, which aims to “prevent unethical outcomes,” according to the framework. The framework is a self-described “living document,” and it calls for periodic reviews of how AI is being used to avoid “any undesired biases or unintended outcomes.”
One noteworthy aspect of the framework is that it poses many of the challenges to be addressed as questions. These would be answered by those procuring, incorporating and managing AI, and their answers would promote the ethical design of AI in intelligence, the framework states.
The framework breaks down its questions into 10 query areas. “Mitigating Undesired Bias and Ensuring Objectivity” is the area that receives the greatest focus, with three paragraphs defining its importance and six questions for AI implementers. Other areas focus on understanding goals and risks; legal obligations and policy considerations governing AI data; human judgment and accountability; AI testing; accounting for AI builds, versions and evolution; documentation of purpose, parameters, limitations and design outcomes; and transparency, to include explainability and interpretability.
One area that offers little text but plays a significant role in virtually every aspect of intelligence AI is stewardship and accountability, particularly for training data, algorithms, models, model outputs and documentation. The framework declares that, before AI is deployed, it must be clear who will have responsibility for the continued maintenance, monitoring, updating and decommissioning of the AI. The questions posed largely break this issue down into more specific areas, including who is accountable for ethical considerations during all stages of the AI lifecycle. This area also asks who will be responsible for addressing concerns that an AI no longer meets the ethical framework, and what actions that person could, or will, undertake to remedy the shortcoming. These actions could range from modifying or limiting the use of the AI to stopping it completely, the framework offers.