ARTIFICIAL INTELLIGENCE ETHICS GUIDELINES

WHAT ARE THE DUBAI AI ETHICS GUIDELINES?

The Dubai AI Ethics Guidelines relate to the Ethics principle in the Dubai AI Principles. They offer tangible suggestions to help stakeholders adhere to the principle, and deliver detailed guidance on the crucial issues of Fairness, Accountability, Transparency and Explainability of the algorithms at the heart of AI systems.

We would like to see the Dubai AI Ethics Guidelines evolve into a universal, practical and applicable framework informing ethical requirements for AI design and use. This is a collaborative process where all stakeholders are invited to be part of the dialogue. We value your contribution towards refining, expanding and evolving these guidelines.

Fair

We will make AI systems fair

1. Consideration should be given to whether the data ingested is representative of the affected population

2. Consideration should be given to whether decision-making processes introduce bias

3. Significant decisions informed by the use of AI should be fair

4. AI operator organisations should consider whether their AI systems are accessible and usable in a fair manner across user groups

5. Consideration should be given to the effect of diversity on the development and deployment processes
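As a purely illustrative sketch (not part of the guidelines themselves), guideline 1 above can be approached programmatically by comparing group proportions in the ingested data against the affected population. The group labels, reference shares and tolerance below are hypothetical examples.

```python
# Illustrative sketch only: a simple representativeness check for guideline 1.
# Group labels, population shares and the tolerance are hypothetical.
from collections import Counter

def representation_gaps(sample_groups, population_shares, tolerance=0.05):
    """Return groups whose share in the ingested data deviates from the
    affected population by more than `tolerance` (absolute difference)."""
    counts = Counter(sample_groups)
    total = sum(counts.values())
    gaps = {}
    for group, expected in population_shares.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > tolerance:
            gaps[group] = (observed, expected)
    return gaps

# Hypothetical ingested records vs. census-style population shares
sample = ["A"] * 70 + ["B"] * 20 + ["C"] * 10
population = {"A": 0.50, "B": 0.30, "C": 0.20}
print(representation_gaps(sample, population))
```

A check of this kind flags under- and over-represented groups early, before any model is trained on the data; real systems would use richer statistical tests and domain-specific group definitions.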

Accountable

We will make AI systems accountable

1. Accountability for the outcomes of an AI system should not lie with the system itself

2. Positive efforts should be made to identify and mitigate any significant risks inherent in the AI systems designed

3. SUSPENDED - AI systems informing critical decisions should be subject to appropriate external audit

4. AI subjects should be able to challenge significant automated decisions concerning them and, where appropriate, be able to opt out of such decisions

5. AI systems informing significant decisions should not attempt to make value judgements on people’s behalf without prior consent

6. AI systems informing significant decisions should be developed by diverse teams with appropriate backgrounds

7. AI operator organisations should understand the AI systems they use sufficiently to assess their suitability for the use case and to ensure accountability and transparency

Transparent

We will make AI systems transparent

1. Traceability should be considered for significant decisions, especially those that have the potential to result in loss, harm or damage

2. People should be informed of the extent of their interaction with AI systems

Explainable

We will make AI systems as explainable as technically possible

1. AI operator organisations could consider providing affected AI subjects with a high-level explanation of how their AI system works

2. AI operator organisations should consider providing affected AI subjects with a means to request explanations for specific significant decisions, to the extent possible given the state of present research and the choice of model

3. In the case that such explanations are available, they should be easily and quickly accessible, free of charge and user-friendly
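To make guidelines 1 and 2 above concrete, the following is a minimal, purely hypothetical sketch of how an explanation for a specific decision might be surfaced, assuming a simple linear scoring model. The feature names, weights and threshold are invented for illustration and do not reflect any actual system.

```python
# Illustrative sketch only: plain-language explanation of a specific automated
# decision, assuming a simple linear scoring model. All values are hypothetical.

WEIGHTS = {"income": 0.6, "years_employed": 0.3, "late_payments": -0.8}
THRESHOLD = 0.5

def explain_decision(applicant):
    # Contribution of each factor to this applicant's score
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    outcome = "approved" if score >= THRESHOLD else "declined"
    # Rank factors by the size of their influence on this decision
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    reasons = [f"{name} ({value:+.2f})" for name, value in ranked]
    return outcome, reasons

outcome, reasons = explain_decision(
    {"income": 0.9, "years_employed": 0.5, "late_payments": 0.4}
)
print(outcome, reasons)
```

For more complex model families (deep networks, large ensembles), per-decision explanations are an open research area, which is why the guideline is hedged by "to the extent possible given the state of present research and the choice of model".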

AI Ethics Self-Assessment Tool

The Dubai AI Ethics Self-Assessment Tool enables AI developer organisations and AI operator organisations to self-evaluate the ethics level of an AI system against Dubai's AI Ethics Guidelines. Try it out and give us your feedback.

Try AI Ethics Self-Assessment Tool

Contribute to the future development of the Dubai AI Ethics Guidelines

The Dubai AI Ethics Guidelines, along with other elements of the Dubai Ethical AI Toolkit, are designed to inspire and inform future AI behaviour. Indeed, collaboration from all stakeholders is vital in ensuring their sustainability and usefulness. Your feedback is therefore warmly appreciated.

Share feedback

FAQ

Have a Question?
  • AI ethics is a growing topic of discussion on the international stage. Governments, NGOs and companies are trying to understand how they can develop AI systems in an ethical way to avoid harm to individuals while also safeguarding themselves from reputational and legal damage. There is an understanding that regulation may be needed at some stage, but the field is not yet sufficiently mature. The pace of advancement is also too rapid to be able to codify the ecosystem within a regulatory framework.

    There is still a need for guidance and collaboration, both at individual and organisational level. For regulators, the challenge is to start conversations about how to regulate this emerging technology without stifling innovation and advancement. There is also the need to begin forming a unified view of best practices in AI development, and to offer clarity on ethical frameworks that can inform this development. Dubai wishes to be part of this global conversation, and wants to establish itself as a thought leader in AI adoption for both public and private sectors. We want to create a platform, and to offer resources, that can spark meaningful debate. Our toolkit is a collaborative work, and is designed to draw in stakeholders both local and global in its refinement and evolution.

    Apart from theoretical elements and principles, the Dubai AI Ethics Toolkit also offers resources designed to be clear, accessible and practical in implementation so that those involved in the development and use of AI systems can already begin to voluntarily benchmark themselves against best practice.

  • Artificial intelligence (AI) is the capability of a functional unit to perform functions that are generally associated with human intelligence such as reasoning, learning and self-improvement.

    An AI system is a product, service, process or decision-making methodology whose operation or outcome is materially influenced by artificially intelligent functional units. A particular feature of AI systems is that they learn behaviour and rules not explicitly programmed. Importantly, it is not necessary for a system’s outcome to be solely determined by artificially intelligent functional units in order for the system to be defined as an artificially intelligent system. Simply put, hybrid systems with both conventional and AI capabilities would still qualify as AI systems. If your system has any AI component within it, the overall system can be considered an AI system.

  • Insofar as our remit is concerned, ethics covers the concepts of fairness, accountability, transparency and explainability (FATE). To keep things manageable and the discussion focused, our ethics model does not currently include privacy concerns, model accuracy (except insofar as fairness and redress are concerned), employment, or any other AI-related issues besides FATE.

    1. Dubai’s AI Principles - High-level, non-statutory (not legally binding), non-audited set of statements that indicate how we want to develop AI in Dubai. There are 4 overarching principles with sub-principles under each of them.

    2. Dubai’s Ethical AI Guidelines - Like the Principles, the Guidelines are also non-statutory and non-audited. They are more practical and apply to specific sets of actors. Only Principle 3 (Ethics) has currently been expanded into a set of guidelines. The other principles may be developed into guidelines in time, though not necessarily by Smart Dubai. We’re happy for the community to take this on.

    3. Self-Assessment Tool – Designed to translate the guidelines into practical application and assessment. The self-assessment tool allows entities to first classify their AI system, and then assess their ethical score based on the system type, and can be used by both AI developers and client organisations. It is a useful tool for inclusion in RFPs by AI client organisations to ensure that vendor solutions meet ethical criteria. Best practice calls for AI operators to take the lead in ensuring that the self-assessment tool questions have been answered correctly with relevant evidence.

    4. Pointers to key literature for technical experts - This document aims to help technical experts (e.g. data scientists, machine learning engineers, AI engineers) investigate how to apply ethics to AI systems. This is a work in progress given that ethics in AI is a rapidly evolving field with further advances imminent. AI developers can evaluate for themselves the suitability and risks of each decision-making method.

  • The Dubai AI Principles and the guidelines derived from them are not mandatory, audited or legally binding. The aim is to create a platform for conversation around ethics in AI systems, with developers and AI system operators choosing to voluntarily abide by the guidance provided. Eventually, with community buy-in, these principles and guidelines might serve as a basis for AI system regulation when it is introduced.
  • Participation can be beneficial for AI developer and operator organisations for several reasons. By contributing, entities and individuals can:

    • Self-assess and mitigate unintentional unethical behaviour by their AI systems that might otherwise lead to public backlash, reputational damage or even legal liability

    • Gain a clear understanding of the meaning of ethics in AI, and communicate this effectively to stakeholders and customers – thereby increasing trust in AI systems and improving acceptance

    • Contribute to the development of a unified view on best practices in AI development

    • Become part of a conversation that is helping shape best practices that might eventually form the basis of formal regulations

  • The toolkit is designed to be useful for AI developer and operator organisations across the public and private sectors. In the near term, however, only public sector entities, together with private companies who develop AI systems for them, are actively expected to use the toolkit.

    The guidelines refer to ‘AI developer organisations’ and ‘AI operator organisations’. The definitions of these are as follows:

     

    • An AI developer organisation is an organisation which does any of the following:

    • Determine the purpose of an AI system;

    • Design an AI system;

    • Build an AI system, or;

    • Perform technical maintenance or tuning on an AI system

    N.B. The definition applies regardless of whether the organisation is the ultimate user of the system, or whether they sell it on or give it away

    • An AI operator organisation is an organisation which does any of the following:

    • Use AI systems in operations, backroom processes or decision-making

    • Use an AI system to provide a service to end-users

    • Is a business owner of an AI system

    • Procure and treat data for use in an AI system, or;

    • Evaluate the use case for an AI system and decide whether to proceed

    N.B. This definition applies regardless of whether the AI system was developed in-house or procured.

    It is possible for organisations to be both an AI developer organisation and an AI operator organisation.

  • AI already surrounds us, but some applications are more visible and sensitive than others. This toolkit is applicable only to those AI systems that make or inform ‘significant decisions’ - decisions that can have significant impact on individuals or society as a whole. It also applies to ‘critical decisions’, which are a subset of significant decisions. Full definitions are available in the guidelines.

  • The Dubai AI Principles serve as the foundation for the use of AI in Dubai. Entities should consider them while developing and operating AI systems and when setting strategy.

    The Dubai AI guidelines can be applied by integrating them with the entity’s existing policies, standards and other documentation. It is important that they are used throughout the development and deployment process in order to achieve ethical design, rather than being an afterthought. Employees should be educated about the meaning and importance of ethical design throughout the process, and the guidelines can act as an educational document in this case.

    The self-assessment tool can be used directly by AI developer organisations to ensure that they meet the standards expected of public sector AI systems. AI operator organisations can include parts of the self-assessment tool in their RFPs if they work with vendors. AI operator organisations, as actors who ultimately deploy AI systems to serve people, are responsible for ensuring that adequate ethical standards have been met by internal development teams, and by the vendors they choose to do business with.

  • We’re giving you an example of a process, but in no way suggesting that this is the only way to start using the Ethical AI Toolkit. Our resources, guidelines and principles are designed to be flexible, and to support rather than instruct. We’re happy to leave it to individual project teams to decide how best to fit our Ethical AI Toolkit within their processes, and welcome feedback and suggestions.

    The teams most likely to be involved in using the Ethical AI Toolkit are strategy, compliance and technical/development. You might want to consider the following elements when adopting the toolkit:

     

    • Identify a responsible person for the toolkit internally. It’s best that your lead and point of contact have sufficient influence to be able to change processes where necessary.

    • Align internal AI strategy with the principles and guidelines.

    • Assess how the guidelines align with existing regulations, internal policies, guidelines and standards applicable to your organisation and the wider environment

    • Then:

      • For AI developer organisations: Assess the impact of the toolkit on the design, development and testing processes. Identify where the self-assessment tool can be introduced and used effectively.

      • For AI operator organisations: Investigate how the toolkit can inform your procurement process and AI system operation. You might want to introduce the self-assessment tool into your RFP and use the resulting score as a criterion in evaluating vendors. Then, analyse changes and optimisations to your testing processes. Also determine whether to test vendors on their self-assessed ethics score.

    • Define ongoing processes and responsibilities, because monitoring and control structures are crucial to continuity

    • Set up Ethical AI results reporting

    • Choose to make self-assessment tool results accessible on the organisation’s website, or formalise an ethical declaration for your AI system that takes into account key areas of consideration from the principles and guidelines

    • Organise educational sessions on ethical design for employees.

  • We identified 7 AI policy challenge areas, which we addressed fully with the 4 Dubai AI Principles. Principle 3 (Ethics) is both mature enough to be developed further at present, and falls within our organisation’s mandate.

  • The toolkit ultimately serves the people and end-users who depend on, or are affected by, AI systems. The term used in the toolkit is ‘AI subject’. An AI subject is a natural person who is any of the following:

    • An end-user of an AI system

    • Directly affected by the operation of or outcomes of an AI system, or;

    • A recipient of a service or recommendation provided by an AI system

    The toolkit helps ensure that people are treated fairly and are able to challenge decisions they perceive to be unfair, and that they have access to important information, delivered in an understandable manner, about how AI affects their lives.

  • Some guidelines, and the self-assessment tool items related to them, have been suspended for the time being as they are a work in progress and may not yet meet key quality criteria. Some of the suspended guidelines relate to external auditing mechanisms that have not yet been established. We are continually iterating and improving these guidelines. You do not need to apply suspended guidelines to your work. However, we would value suggestions on improving these guidelines so they can be brought out of suspended status.

  • The assessment of the AI system’s impact on end users is ultimately up to the judgement of individual entities and project owners. We suggest that you start by creating a list of the audiences who might be impacted by your AI system. Our self-assessment tool is currently geared towards AI systems that serve and affect the public. For internal AI systems, the call is yours to make whether a comprehensive self-assessment is needed, and the benchmarks that should be applied. Our toolkit is designed to provide guidance and assistance, and spark a discussion around ethical AI use. This is a collaborative process, and we value your input, feedback and suggestions. We will be improving the toolkit based on your feedback to include as many potential use cases as possible.

  • The toolkit is designed to identify and help fix performance gaps, so a low initial score just means that there are ethical concerns to consider. The first step is to check which guidelines are being highlighted as alerts. Then, you can explore possible mitigation measures. It’s best to embed the guidelines into the design, implementation and end use process from the outset to minimise gaps.

    Remember that the principles and guidelines for the self-assessment tool are recommendations as opposed to compulsory criteria. Recommendation strength also varies. The guidelines are meant for self-assessment, and your results will not be audited, checked or regulated at present. They are also a collaborative work in progress where your feedback and suggestions for improvement are invaluable. We would appreciate you sharing them through the feedback tool on the website.

    Nevertheless, we suggest that you not proceed with system implementation before it is able to demonstrate a certain level of ethics performance. This avoids costly remediation at later stages. We recommend that you consider the guidelines throughout your project, rather than benchmarking at the very end.

    Finally, if some of the mitigation measures suggested are either not applicable to you, or should be answered by another stakeholder, you should highlight areas of concern and make sure the message is communicated so that mitigation measures can become part of your project’s later stages.
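To illustrate how a score and its alerts relate, the following sketch tallies a hypothetical weighted checklist. The item texts and weights are invented for illustration and are not the actual tool's questions or scoring method.

```python
# Illustrative sketch only: tallying a self-assessment score and its alerts
# from answered checklist items. Item texts and weights are hypothetical.

CHECKLIST = [
    ("Training data assessed for representativeness", 2),
    ("Significant decisions can be challenged by AI subjects", 3),
    ("High-level explanation of the system is available", 1),
]

def assess(answers):
    """answers: dict mapping item text -> True/False."""
    max_score = sum(weight for _, weight in CHECKLIST)
    score = sum(weight for item, weight in CHECKLIST if answers.get(item))
    # Unmet items become the 'alerts' to investigate and mitigate
    alerts = [item for item, _ in CHECKLIST if not answers.get(item)]
    return score / max_score, alerts

score, alerts = assess({
    "Training data assessed for representativeness": True,
    "Significant decisions can be challenged by AI subjects": False,
    "High-level explanation of the system is available": True,
})
print(round(score, 2), alerts)
```

The point of the exercise is the alert list rather than the number: each unmet item points to a guideline to revisit and a mitigation to plan.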

  • The tool is used for self-assessment purposes only and will not be audited, checked or regulated. It is also a collaborative work in progress where your feedback and suggestions for improvement are invaluable. Please share them through the feedback tool on the website. We may, however, collect some high-level metadata relating to toolkit adoption. The specific feedback and suggestions you offer will be used to iterate and improve the toolkit.