There is little doubt that AI is changing the business landscape and providing competitive advantages to those that embrace it. It is time, however, to move beyond simply implementing AI and to ensure that AI is used in a safe and ethical manner. This is called responsible AI, and it will serve not only as a safeguard against negative consequences, but also as a competitive advantage in its own right.
What is responsible AI?
Responsible AI is a governance framework that covers ethical, legal, safety, privacy, and accountability concerns. Although the implementation of responsible AI varies by company, the necessity of it is clear. Without responsible AI practices in place, a company is exposed to serious financial, reputational, and legal risks. On the positive side, responsible AI practices are becoming prerequisites to even bidding on certain contracts, especially when governments are involved; a well-executed strategy will greatly help in winning those bids. Additionally, embracing responsible AI can contribute to a reputational gain for the company overall.
Values by design
Much of the problem with implementing responsible AI comes down to foresight: the ability to predict what ethical or legal issues an AI system could raise during its development and deployment lifecycle. Right now, most responsible AI considerations happen after an AI product is developed, which is a very ineffective way to implement AI. If you want to protect your company from financial, legal, and reputational risk, you have to start projects with responsible AI in mind. Your company needs to have values by design, not whatever values you happen to end up with at the end of a project.
Implementing values by design
Responsible AI covers a large number of values that need to be prioritized by company leadership. While covering all areas is important in any responsible AI plan, how much effort your company expends on each value is up to company leaders. There has to be a balance between checking for responsible AI and actually implementing AI. If you expend too much effort on responsible AI, your effectiveness may suffer; on the other hand, ignoring responsible AI is being reckless with company resources. The best way to manage this trade-off is to conduct a thorough analysis at the onset of the project, not as an after-the-fact effort.
Best practice is to establish a responsible AI committee to review your AI projects before they start, periodically during the projects, and upon completion. The purpose of this committee is to evaluate each project against responsible AI values and approve it, disapprove it, or disapprove it with actions required to bring it into compliance. Those actions can include requesting that more information be gathered or that aspects of the project be fundamentally changed. Like an institutional review board used to monitor ethics in biomedical research, this committee should contain both AI experts and non-technical members. The non-technical members can come from any background and serve as a reality check on the AI experts. The AI experts, on the other hand, may better understand the difficulties and possible remediations, but can become so accustomed to institutional and industry norms that they are not sensitive enough to the concerns of the greater community.
What values should the Responsible AI Committee consider?
The values to focus on should be chosen by the business to fit within its overall mission statement. Your business will likely choose specific values to emphasize, but all major areas of concern should be covered. There are many frameworks you can use for inspiration, such as Google’s and Facebook’s. For this article, however, we will base the discussion on the recommendations set forth by the High-Level Expert Group on Artificial Intelligence set up by the European Commission in The Assessment List for Trustworthy Artificial Intelligence. These recommendations cover seven areas. We will explore each area and suggest questions to ask about it.
1. Human agency and oversight
AI projects should respect human agency and decision making. This principle involves how the AI project will influence or support humans in the decision-making process. It also involves how the subjects of AI will be made aware of the AI and put trust in its outcomes. Some questions that need to be asked include:
2. Technical robustness and safety
Technical robustness and safety require that AI projects preemptively address the risks of the AI performing unreliably and minimize the impact of such failures. The AI project should be able to perform predictably and consistently, and it should be protected from cybersecurity threats. Some questions that need to be asked include:
3. Privacy and data governance
AI should protect individual and group privacy, both in its inputs and its outputs. The algorithm should not include data that was gathered in a way that violates privacy, and it should not give results that violate the privacy of the subjects, even when bad actors are trying to force such errors. In order to do this effectively, data governance must also be a concern. Appropriate questions to ask include:
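As an illustrative sketch of one common data governance check, the snippet below tests a toy dataset for k-anonymity, verifying that every combination of quasi-identifier values appears at least k times before the data is used or released. The field names, records, and threshold are all assumptions made up for the example, not part of any particular framework:

```python
from collections import Counter

def k_anonymity(records, quasi_identifiers, k=5):
    """Return (passes, violations): does every combination of
    quasi-identifier values appear at least k times?"""
    combos = Counter(
        tuple(r[q] for q in quasi_identifiers) for r in records
    )
    violations = {c: n for c, n in combos.items() if n < k}
    return len(violations) == 0, violations

# Toy records: zip code and age are quasi-identifiers that could
# re-identify individuals when combined with outside data.
records = [
    {"zip": "10001", "age": 34, "diagnosis": "A"},
    {"zip": "10001", "age": 34, "diagnosis": "B"},
    {"zip": "10002", "age": 51, "diagnosis": "A"},
]

ok, violations = k_anonymity(records, ["zip", "age"], k=2)
print(ok)  # False: the combination ("10002", 51) appears only once
```

A real pipeline would run a check like this (with a defensible k) as a gate before data leaves the governed environment.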
4. Transparency
Transparency covers concerns about the traceability of individual results and the overall explainability of AI algorithms. Traceability allows the user to understand why an individual decision was made. Explainability refers to the user being able to understand the basics of the algorithm used to make the decision, as well as which factors were involved in the decision-making process for their specific prediction. Questions to ask are:
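To make the traceability idea concrete, here is a minimal sketch, assuming a simple linear scoring model with made-up feature names and weights, that decomposes one prediction into per-feature contributions a user could inspect:

```python
# Illustrative linear model: weights and feature names are
# hypothetical, chosen only to demonstrate the decomposition.
weights = {"income": 0.4, "debt_ratio": -1.2, "years_employed": 0.3}
bias = 0.1

def explain(applicant):
    """Return the score plus each feature's contribution to it."""
    contributions = {f: w * applicant[f] for f, w in weights.items()}
    score = bias + sum(contributions.values())
    return score, contributions

score, contributions = explain(
    {"income": 2.0, "debt_ratio": 0.5, "years_employed": 4.0}
)
print(round(score, 2))  # 1.5
# Factors ranked by the size of their influence on this decision:
for feature, c in sorted(contributions.items(), key=lambda x: -abs(x[1])):
    print(f"{feature}: {c:+.2f}")
```

For non-linear models the same goal is typically pursued with attribution methods rather than a direct read of the weights, but the output a user needs, "which factors drove my result," is the same.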
5. Diversity, non-discrimination, and fairness
In order to be considered responsible, an AI project must work as well as possible for all subgroups of people. While AI bias can rarely be eliminated entirely, it can be effectively managed. This mitigation can take place during the data collection process, by including a more diverse background of people in the training dataset, and at inference time, to help balance accuracy between different groupings of people. Common questions include:
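As a minimal sketch of how subgroup performance might be monitored, the snippet below computes accuracy separately per group on toy labels and predictions (all values are illustrative) and reports the largest gap, the kind of number a responsible AI review would track over time:

```python
def accuracy_by_group(y_true, y_pred, groups):
    """Accuracy computed separately for each subgroup, so gaps
    between groups are visible rather than hidden in one average."""
    stats = {}
    for t, p, g in zip(y_true, y_pred, groups):
        correct, total = stats.get(g, (0, 0))
        stats[g] = (correct + (t == p), total + 1)
    return {g: correct / total for g, (correct, total) in stats.items()}

# Toy labels, predictions, and group membership (illustrative only).
y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 1, 1, 0, 0]
groups = ["a", "a", "a", "b", "b", "b"]

acc = accuracy_by_group(y_true, y_pred, groups)
gap = max(acc.values()) - min(acc.values())
print(acc)            # group "a" is perfect, group "b" is not
print(round(gap, 2))  # 0.33
```

Accuracy is only one lens; depending on the application, the same per-group breakdown can be applied to false-positive or false-negative rates instead.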
6. Societal and environmental well-being
An AI project should be evaluated in terms of its impact on its subjects and users, along with its impact on the environment. Social norms such as democratic decision making, upholding values, and preventing addiction to AI products should be upheld. Furthermore, the environmental results of the AI project's decisions should be considered where applicable. One factor applicable in nearly all cases is an evaluation of the amount of energy needed to train the required models. Questions that can be asked:
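As a rough illustration of the energy question, this back-of-envelope sketch estimates training energy from GPU count, average power draw, training time, and data center overhead (PUE). Every number here is a hypothetical assumption; a real assessment would use measured power and the facility's actual PUE:

```python
def training_energy_kwh(gpu_count, avg_power_watts, hours, pue=1.5):
    """Rough training-energy estimate in kWh: device power draw
    times training time, scaled by data center PUE overhead."""
    return gpu_count * avg_power_watts * hours * pue / 1000.0

# Hypothetical run: 8 GPUs averaging 300 W each for 72 hours.
energy = training_energy_kwh(gpu_count=8, avg_power_watts=300, hours=72)
print(energy)  # 259.2 kWh
```

Even a crude estimate like this lets the review committee compare candidate model sizes and training plans on energy terms before training starts.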
7. Accountability
Some person or organization needs to be responsible for the actions and decisions made by the AI project or encountered during its development. There should be a system to ensure adequate possibility of redress in cases where detrimental decisions are made. Some time and attention should also be paid to risk management and mitigation. Appropriate questions include:
The bottom line
The seven values of responsible AI outlined above provide a starting point for an organization's responsible AI initiative. Organizations that pursue responsible AI will find they increasingly have access to more opportunities, such as bidding on government contracts. Organizations that don't implement these practices expose themselves to legal, ethical, and reputational risks.
David Ellison is Senior AI Data Scientist at Lenovo.