Don’t Let AI Surprise You in a Negative Way
Introduction
The business opportunity for Artificial Intelligence (AI) is huge. According to PwC, AI will generate $1.7 trillion of value across sectors such as healthcare, automotive, finance, transportation & logistics, ICT, entertainment, retail, energy and manufacturing. With this increasing uptake, there are also more and more examples of the negative consequences of this powerful technology, such as unfair discrimination and opaque algorithmic decisions.
What can you do as an organization that has just started enjoying the benefits of AI, but that doesn’t want to get caught up in one of those negative consequences? Read on to find out how to make sure AI doesn’t surprise you in a negative way.
Choose the right AI Principles for your organization
The news stories of several negative consequences of AI (mostly unintended, such as COMPAS, Apple Card, Amazon, Dutch Government) have led to a proliferation of voluntary ethical guidelines or AI Principles, through which organizations publicly declare that they want to use AI in a fair, transparent, safe, robust, human-centric, etc. way, avoiding negative consequences or harm. Harvard University analyzed the AI Principles of the first 36 organizations in the world that published such guidelines and found 8+1 most-used categories, including human values, professional responsibility, human control, fairness & non-discrimination, transparency & explainability, safety & security, accountability, and privacy & human rights. The non-profit organization Algorithm Watch maintains an open inventory of AI Guidelines with currently over 160 organizations. It is therefore not easy for organizations to decide what AI Principles to adopt. Here are four considerations that organizations can use for choosing the AI Principles appropriate for their business and vision:
1. Distinguish between, on the one hand, principles relevant for governments, such as the future of work, lethal autonomous weapon systems, liability, concentration of power & wealth, and, on the other hand, principles that individual organizations can act on, such as privacy, security, fairness and transparency.
2. Distinguish between intended and unintended consequences. Many challenges of the use of AI occur as unintended side effects of the technology (e.g. bias, lack of explainability, future of work). Intended consequences are explicit decisions that can be controlled, such as using AI for good or for bad. Organizations are better off formulating their principles for the unintended consequences.
3. Consider whether the AI Principles should cover all aspects relevant for AI systems (e.g. safety, privacy, security, fairness, human agency, etc.) in an end-to-end manner, versus covering only AI-specific challenges (e.g. fairness, explainability, human agency).
4. Consider the specific sector you are operating in. For example, using AI in the aviation sector will put high value on safety, whereas the insurance sector will need to put high value on fairness, and the medical sector on explainability.
As an example, consider Telefonica’s AI Principles, which state that the use of AI should be fair, transparent & explainable, human-centric, and with privacy & security. Moreover, they also apply to providers of AI solutions.
Implement a methodology for the responsible use of AI
Once the appropriate AI Principles have been defined, they need to be implemented, i.e., become part of “business as usual” (BAU). This can be done by applying a methodology called “Responsible AI by Design”, which has five ingredients and is used at Telefonica.
• The first ingredient is the AI Principles themselves, which provide the values and norms of how and for what AI can be used.
• Second, it is important to provide training to employees explaining all relevant aspects.
• Third, when designing, developing or buying AI systems, employees need to complete an online questionnaire with a set of questions and recommendations corresponding to each principle.
• Fourth, tools are important for supporting automatic checking of bias in the data, for mitigating potentially discriminatory algorithmic outcomes, for finding proxy variables for sensitive variables, for creating explainable AI for black-box algorithms, and for data anonymization. There is a growing number of open source tools available, such as AI Fairness 360 and InterpretML.
• Fifth, a governance model defines the responsibilities and the escalation process when the questionnaire reveals issues. We identify a new role called the Responsible AI Champion, who is knowledgeable about the area, is available for fellow employees in a given geography or business unit, and provides awareness, advice, assistance and escalation if needed. Champions are also crucial for turning new practices into BAU, and as such are agents of change. In particular, the responsibilities of a Responsible AI Champion are to inform, educate, advise & escalate, coordinate, connect and manage change (see figure).
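To make the fourth ingredient concrete, here is a minimal sketch of the kind of bias check that tools such as AI Fairness 360 automate: computing the disparate impact ratio of a model’s outcomes across a sensitive attribute. The data, group labels and threshold below are purely illustrative, not from any real system.

```python
def disparate_impact(outcomes, groups, privileged):
    """Ratio of favorable-outcome rates: unprivileged group / privileged group.

    A value well below 1.0 (a common rule of thumb is 0.8) suggests the
    unprivileged group receives favorable outcomes at a much lower rate.
    """
    priv = [o for o, g in zip(outcomes, groups) if g == privileged]
    unpriv = [o for o, g in zip(outcomes, groups) if g != privileged]
    rate_priv = sum(priv) / len(priv)
    rate_unpriv = sum(unpriv) / len(unpriv)
    return rate_unpriv / rate_priv

# Toy loan-approval decisions (1 = approved) for two hypothetical groups
outcomes = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups   = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

di = disparate_impact(outcomes, groups, privileged="A")
print(f"Disparate impact: {di:.2f}")  # 0.25 here, well below 0.8, so flag for review
```

In practice, a library such as AI Fairness 360 provides this and many other metrics out of the box, together with mitigation algorithms; the point of the sketch is only to show what “automatic checking of bias” means at its simplest.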
Artificial Intelligence creates a huge number of new business opportunities, but sometimes also leads to undesired consequences. In order to minimize the likelihood of such a negative surprise, it is important to adopt ethical AI Principles and to implement them in your business practices. We hope that the two-step approach described here helps organizations to be better prepared for preventing such negative consequences, and therefore to maximally enjoy the benefits of AI.