ARTIFICIAL INTELLIGENCE PRINCIPLES & ETHICS

AI’s rapid advancement and its innovation potential across a range of fields are incredibly exciting. Yet a thorough and open discussion of AI ethics, and of the principles that organisations using this technology must consider, is urgently needed.

Dubai’s Ethical AI Toolkit has been created to provide practical help across a city ecosystem. It supports industry, academia and individuals in understanding how AI systems can be used responsibly. It consists of principles and guidelines, and a self-assessment tool for developers to assess their platforms.

Our aim is to offer unified guidance that is continuously improved in collaboration with our communities. The eventual goal is widespread adoption of commonly agreed policies to inform the ethical use of AI, not just in Dubai but around the world.

AI Principles

Our vision is for Dubai to excel in the development and use of AI in ways that both boost innovation and deliver human benefit and happiness.

Ethics

AI systems should be fair, transparent, accountable and understandable
Security

AI systems should be safe and secure, and should serve and protect humanity
Humanity

AI should be beneficial to humans and aligned with human values, in both the long and short term
Inclusiveness

AI should benefit all people in society, be governed globally, and respect dignity and people’s rights

AI ETHICS GUIDELINES

Our Ethical AI Guidelines expand on Dubai’s AI Principle on Ethics, which deals with fairness, transparency, accountability and explainability.

Fair

• Demographic fairness
• Fairness in design
• Fairness in data
• Fairness in algorithms
• Fairness in outcomes
Accountable

• Apportionment of accountabilities
• Accountable measures for mitigating risks
• Appeals procedures and contingency plans
Transparent

• Identifiable by humans
• Traceability of cause of harm
• Auditability by public
Explainable

• Process explainability
• Outcomes explainability
• Explainability in non-technical terms
• Channels of explanation

TARGET USERS

Dubai’s Ethical AI Toolkit is particularly helpful for three main types of user:

1. Government Entities – those that procure or internally develop AI technologies to provide services for the population

2. Private Sector Entities – startups and enterprises, large and small, that provide AI systems to government entities or to their own customer base

3. Individuals – anyone with an interest in how ethical AI is applied in society and city service settings

APPROACH & FUTURE DEVELOPMENT

Our approach is open source. We actively encourage the application of an ethics filter to AI systems. Feedback from across the AI community, combined with our continual research, will help us iterate Dubai’s Ethical AI Toolkit so that the framework and guidance keep pace with technological advancement. The short-term aim is to create a trusted platform for furthering discussion and policy development around AI ethics. In the longer term, we might provide advisory and audit services to help with the application, maintenance and governance of ethical AI systems. Our guidelines will also be reviewed annually by our Advisory Board, which comprises leading AI and ethics experts from the private and public sectors. Eventually, we hope to see these guidelines inform regulations guiding AI development and use.

FAQ

Have a Question?
  • AI ethics is a growing topic of discussion on the international stage. Governments, NGOs and companies are trying to understand how they can develop AI systems in an ethical way to avoid harm to individuals while also safeguarding themselves from reputational and legal damage. There is an understanding that regulation may be needed at some stage, but the field is not yet sufficiently mature. The pace of advancement is also too rapid to be able to codify the ecosystem within a regulatory framework.

    There is still a need for guidance and collaboration, both at individual and organisational level. For regulators, the challenge is to start conversations about how to regulate this emerging technology without stifling innovation and advancement. There is also the need to begin forming a unified view of best practices in AI development, and to offer clarity on ethical frameworks that can inform this development. Dubai wishes to be part of this global conversation, and wants to establish itself as a thought leader in AI adoption for both public and private sectors. We want to create a platform, and to offer resources, that can spark meaningful debate. Our toolkit is a collaborative work, and is designed to draw in stakeholders both local and global in its refinement and evolution.

    Apart from theoretical elements and principles, the Dubai AI Ethics Toolkit also offers resources designed to be clear, accessible and practical in implementation so that those involved in the development and use of AI systems can already begin to voluntarily benchmark themselves against best practice.

  • Artificial intelligence (AI) is the capability of a functional unit to perform functions that are generally associated with human intelligence such as reasoning, learning and self-improvement.

    An AI system is a product, service, process or decision-making methodology whose operation or outcome is materially influenced by artificially intelligent functional units. A particular feature of AI systems is that they learn behaviour and rules not explicitly programmed. Importantly, it is not necessary for a system’s outcome to be solely determined by artificially intelligent functional units in order for the system to be defined as an artificially intelligent system. Simply put, hybrid systems with both conventional and AI capabilities would still qualify as AI systems. If your system has any AI component within it, the overall system can be considered an AI system.

  • Insofar as our remit is concerned, ethics covers the concepts of fairness, accountability, transparency and explainability (FATE). To keep things manageable and the discussion focused, our ethics model does not currently include privacy concerns, model accuracy (except insofar as fairness and redress are concerned), employment, or any other AI-related issues besides FATE.

    1. Dubai’s AI Principles - A high-level, non-statutory (not legally binding), non-audited set of statements that indicate how we want to develop AI in Dubai. There are 4 overarching principles, each with sub-principles under it.

    2. Dubai’s Ethical AI Guidelines - Like the Principles, the Guidelines are also non-statutory and non-audited. They are more practical and apply to specific sets of actors. Only Principle 3 (Ethics) has currently been expanded into a set of guidelines. The other principles may be developed into guidelines in time, though not necessarily by Smart Dubai. We’re happy for the community to take this on.

    3. Self-Assessment Tool – Designed to translate the guidelines into practical application and assessment. The self-assessment tool allows entities first to classify their AI system and then to assess their ethical score based on the system type; it can be used by both AI developers and client organisations. It is a useful tool for inclusion in RFPs by AI client organisations to ensure that vendor solutions meet ethical criteria. Best practice calls for AI operators to take the lead in ensuring that the self-assessment tool questions have been answered correctly, with relevant evidence. (A minimal sketch of this flow follows this list.)

    4. Pointers to key literature for technical experts - This document aims to help technical experts (e.g. data scientists, machine learning engineers, AI engineers) investigate how to apply ethics to AI systems. This is a work in progress given that ethics in AI is a rapidly evolving field with further advances imminent. AI developers can evaluate for themselves the suitability and risks of each decision-making method.
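
    To make item 3 concrete, the following is a minimal, hypothetical sketch of the classify-then-score flow described above. The class names, guideline IDs, score levels and the averaging rule are all illustrative assumptions, not the actual Smart Dubai tool.

    ```python
    # Hypothetical sketch only: the real self-assessment tool's fields and
    # scoring are defined by Smart Dubai; names and weights here are assumptions.
    from dataclasses import dataclass, field

    @dataclass
    class GuidelineCheck:
        guideline_id: str      # illustrative ID, e.g. "FAIR-01"
        performance: float     # self-judged: 0.0 none, 0.5 partial, 1.0 full
        evidence: str = ""     # relevant evidence, e.g. attached to an RFP response

    @dataclass
    class AISystemAssessment:
        system_name: str
        decision_type: str     # "significant" or "critical" (see the guidelines)
        checks: list[GuidelineCheck] = field(default_factory=list)

        def ethics_score(self) -> float:
            """Average self-judged performance across the answered guidelines."""
            if not self.checks:
                return 0.0
            return sum(c.performance for c in self.checks) / len(self.checks)

    # Example: a client organisation reviewing a vendor's self-assessment.
    assessment = AISystemAssessment(
        system_name="permit-triage",
        decision_type="significant",
        checks=[
            GuidelineCheck("FAIR-01", 1.0, "bias audit report attached"),
            GuidelineCheck("TRANS-02", 0.5, "model card in progress"),
        ],
    )
    print(f"Self-assessed ethics score: {assessment.ethics_score():.0%}")  # 75%
    ```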

  • The Dubai AI Principles and the guidelines derived from them are not mandatory, audited or legally binding. The aim is to create a platform for conversation around ethics in AI systems, with developers and AI system operators choosing to voluntarily abide by the guidance provided. Eventually, with community buy-in, these principles and guidelines might serve as a basis for AI system regulation when it is introduced.
  • Participation can be beneficial for AI developer and operator organisations for several reasons. By contributing, entities and individuals can:

    • Self-assess and mitigate unintentional unethical behaviour by their AI systems that might otherwise lead to public backlash, reputational damage or even legal liability

    • Gain a clear understanding of the meaning of ethics in AI, and communicate this effectively to stakeholders and customers – thereby increasing trust in AI systems and improving acceptance

    • Contribute to the development of a unified view on best practices in AI development

    • Become part of a conversation that is helping shape best practices that might eventually form the basis of formal regulations

  • The toolkit is designed to be useful for AI developer and operator organisations across the public and private sectors. In the near term, however, only public sector entities, together with the private companies that develop AI systems for them, are actively expected to use the toolkit.

    The guidelines refer to ‘AI developer organisations’ and ‘AI operator organisations’. The definitions of these are as follows:

     

    • An AI developer organisation is an organisation which does any of the following:

      • Determines the purpose of an AI system;

      • Designs an AI system;

      • Builds an AI system; or

      • Performs technical maintenance or tuning on an AI system.

    N.B. The definition applies regardless of whether the organisation is the ultimate user of the system, or whether it sells the system on or gives it away.

    • An AI operator organisation is an organisation which does any of the following:

      • Uses AI systems in operations, backroom processes or decision-making;

      • Uses an AI system to provide a service to end-users;

      • Is the business owner of an AI system;

      • Procures and treats data for use in an AI system; or

      • Evaluates the use case for an AI system and decides whether to proceed.

    N.B. This definition applies regardless of whether the AI system was developed in-house or procured.

    It is possible for organisations to be both an AI developer organisation and an AI operator organisation.

  • AI already surrounds us, but some applications are more visible and sensitive than others. This toolkit applies only to AI systems that make or inform ‘significant decisions’ – decisions that can have a significant impact on individuals or society as a whole. It also applies to ‘critical decisions’, which are a subset of significant decisions. Full definitions are available in the guidelines; a minimal scoping sketch follows below.
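
    As a purely illustrative sketch of that scoping rule (the authoritative definitions of ‘significant’ and ‘critical’ decisions are those in the guidelines; the predicates below are assumptions):

    ```python
    # Illustrative scoping check; the authoritative definitions of 'significant'
    # and 'critical' decisions live in the guidelines document.
    def decision_scope(informs_decisions: bool,
                       significant_impact: bool,
                       critical_impact: bool) -> str:
        """Classify a system for toolkit applicability."""
        if not (informs_decisions and significant_impact):
            return "out of scope"          # toolkit does not apply
        # Critical decisions are a subset of significant decisions.
        return "critical" if critical_impact else "significant"

    # A triage system affecting access to emergency services: critical.
    print(decision_scope(True, True, True))    # critical
    # An internal spell-checker with no significant impact: out of scope.
    print(decision_scope(True, False, False))  # out of scope
    ```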

  • The Dubai AI Principles serve as the foundation for the use of AI in Dubai. Entities should consider them while developing and operating AI systems and when setting strategy.

    The Dubai AI guidelines can be applied by integrating them with the entity’s existing policies, standards and other documentation. It is important that they are used throughout the development and deployment process in order to achieve ethical design, rather than being an afterthought. Employees should be educated about the meaning and importance of ethical design throughout the process, and the guidelines can act as an educational document in this case.

    The self-assessment tool can be used directly by AI developer organisations to ensure that they meet the standards expected of public sector AI systems. AI operator organisations can include parts of the self-assessment tool in their RFPs if they work with vendors. AI operator organisations, as actors who ultimately deploy AI systems to serve people, are responsible for ensuring that adequate ethical standards have been met by internal development teams, and by the vendors they choose to do business with.

  • We’re giving you an example of a process, but in no way suggesting that this is the only way to start using the Ethical AI Toolkit. Our resources, guidelines and principles are designed to be flexible, and to support rather than instruct. We’re happy to leave it to individual project teams to decide how best to fit our Ethical AI Toolkit within their processes, and welcome feedback and suggestions.

    The teams most likely to be involved in using the Ethical AI Toolkit are strategy, compliance and technical/development. You might want to consider the following elements when adopting the toolkit:

     

    • Identify a responsible person for the toolkit internally. It’s best that your lead and point of contact have sufficient influence to be able to change processes where necessary.

    • Align internal AI strategy with the principles and guidelines.

    • Assess how the guidelines align with existing regulations, internal policies, guidelines and standards applicable to your organisation and the wider environment

    • Then:

      • For AI developer organisations: Assess the impact of the toolkit on the design, development and testing processes. Identify where the self-assessment tool can be introduced and used effectively.

      • For AI operator organisations: Investigate how the toolkit can inform your procurement process and AI system operation. You might want to introduce the self-assessment tool into your RFP and use the resulting score as a criterion in evaluating vendors (a minimal weighting sketch follows this list). Then analyse changes and optimisations to your testing processes, and determine whether to test vendors on their self-assessed ethics score.

    • Define ongoing processes and responsibilities, because monitoring and control structures are crucial to continuity

    • Set up Ethical AI results reporting

    • Choose to make self-assessment tool results accessible on the organisation’s website, or formalise an ethical declaration for your AI system that takes into account key areas of consideration from the principles and guidelines

    • Organise educational sessions on ethical design for employees.
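
    As an illustration of using the self-assessment result as one RFP criterion, here is a minimal sketch; the criteria names and weights are assumptions for the example, not values prescribed by the toolkit.

    ```python
    # Hypothetical RFP evaluation weighting; adjust criteria and weights to
    # your own procurement policy.
    WEIGHTS = {"technical": 0.4, "cost": 0.3, "ethics_self_assessment": 0.3}

    def vendor_score(scores: dict[str, float]) -> float:
        """Weighted total in [0, 1], with the ethics self-assessment score
        treated as one criterion alongside conventional RFP criteria."""
        return sum(WEIGHTS[name] * scores[name] for name in WEIGHTS)

    vendor_a = {"technical": 0.8, "cost": 0.9, "ethics_self_assessment": 0.75}
    vendor_b = {"technical": 0.9, "cost": 0.8, "ethics_self_assessment": 0.40}
    print(vendor_score(vendor_a))  # 0.815 -> stronger overall despite lower technical score
    print(vendor_score(vendor_b))  # 0.72
    ```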

  • We identified 7 AI policy challenge areas, which we addressed fully with the 4 Dubai AI Principles. Principle 3 (Ethics) is both mature enough to be developed further at present, and falls within our organisation’s mandate.

  • The toolkit ultimately serves the people and end-users who depend on, or are affected by, AI systems. The term used in the toolkit is ‘AI subject’. An AI subject is a natural person who is any of the following:

    • An end-user of an AI system

    • Directly affected by the operation or outcomes of an AI system; or

    • A recipient of a service or recommendation provided by an AI system

    The toolkit helps ensure that people are treated fairly, are able to challenge decisions they perceive to be unfair, and have access to important information, delivered in an understandable manner, about how AI affects their lives.

  • The rest of the principles (Humanity, Inclusiveness and Security) will be developed further in partnership with other government entities, and with contributions from the wider community.

  • Some guidelines, and the self-assessment tool items related to them, have been suspended for the time being as they are a work in progress and may not yet meet key quality criteria. Some of the suspended guidelines relate to external auditing mechanisms that have not yet been established. We are continually iterating and improving these guidelines. You do not need to apply suspended guidelines to your work. However, we would value suggestions on improving these guidelines so they can be brought out of suspended status.

  • The assessment of the AI system’s impact on end-users is ultimately up to the judgement of individual entities and project owners. We suggest that you start by creating a list of the audiences who might be impacted by your AI system. Our self-assessment tool is currently geared towards AI systems that serve and affect the public. For internal AI systems, the call is yours whether a comprehensive self-assessment is needed and which benchmarks should be applied. Our toolkit is designed to provide guidance and assistance, and to spark a discussion around ethical AI use. This is a collaborative process, and we value your input, feedback and suggestions. We will be improving the toolkit based on your feedback to include as many potential use cases as possible.

  • The toolkit is designed to identify and help fix performance gaps, so a low initial score just means that there are ethical concerns to consider. The first step is to check which guidelines are being highlighted as alerts. Then, you can explore possible mitigation measures. It’s best to embed the guidelines into the design, implementation and end use process from the outset to minimise gaps.

    Remember that the principles and guidelines behind the self-assessment tool are recommendations as opposed to compulsory criteria, and recommendation strength varies. The guidelines are meant for self-assessment, and your results will not be audited, checked or regulated at present. They are also a collaborative work in progress, where your feedback and suggestions for improvement are invaluable. We would appreciate you sharing them through the feedback tool on the website.

    Nevertheless, we suggest that you not proceed with system implementation before the system can demonstrate a certain level of ethics performance. This avoids costly compliance work at later stages. We recommend that you consider the guidelines throughout your project, rather than benchmarking only at the very end.

    Finally, if some of the mitigation measures suggested are either not applicable to you, or should be answered by another stakeholder, you should highlight areas of concern and make sure the message is communicated so that mitigation measures can become part of your project’s later stages.

  • The tool is used for self-assessment purposes only and will not be audited, checked or regulated. It is also a collaborative work in progress, where your feedback and suggestions for improvement are invaluable. Please share them through the feedback tool on the website. We may, however, collect some high-level metadata relating to toolkit adoption. The specific feedback and suggestions you offer will be used to iterate and improve the toolkit.

  • That is entirely possible. The mitigation measures are not exhaustive and might not be applicable to every use case. They are offered as baseline examples to start a conversation around AI ethics, and offer direction and guidance. If some guidelines are not applicable to your particular use case, you can offer your own mitigation measures in the explanation tab provided.

    We’ve deliberately ensured that the performance level for the guidelines is self-assessed, and does not depend on the answers you’ve given to the mitigation measures and additional questions. This allows you to answer “no” to any mitigation measures that do not apply to you and still record a self-judged performance score – either full or partial. (A minimal sketch of this behaviour follows this answer.)

    If you are an AI Operator or AI Developer who finds that some guidelines are applicable not to you but to your vendor or client, it’s best to mark them as “Not Applicable” and then highlight those guidelines to your vendor or client.

    If some of the guidelines are relevant to later stages of your project, it’s best to take note of those guidelines and keep them in consideration as the project progresses.
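
    A minimal sketch of that behaviour, assuming hypothetical field names rather than the tool’s actual schema: performance is recorded independently of the mitigation answers, and “Not Applicable” items are excluded from the score rather than counted against it.

    ```python
    # Hypothetical sketch: N/A items are excluded from the score rather than
    # counted against it; performance is self-judged, independent of the
    # mitigation-measure answers.
    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class GuidelineResponse:
        guideline_id: str
        applicable: bool                     # False -> flag to vendor/client
        performance: Optional[float] = None  # self-judged 0.0 / 0.5 / 1.0
        mitigation_answers: dict = field(default_factory=dict)  # may all be "no"
        explanation: str = ""                # your own mitigation measures

    def self_assessed_score(responses: list[GuidelineResponse]) -> float:
        """Average self-judged performance over applicable guidelines only."""
        scored = [r.performance for r in responses
                  if r.applicable and r.performance is not None]
        return sum(scored) / len(scored) if scored else 0.0

    responses = [
        GuidelineResponse("FAIR-01", True, 1.0, {"bias-audit": True}),
        GuidelineResponse("ACC-03", True, 0.5, {"appeals-procedure": False},
                          explanation="interim appeals channel via service desk"),
        GuidelineResponse("TRANS-05", False),  # N/A: belongs to the vendor
    ]
    print(f"{self_assessed_score(responses):.0%}")  # 75% over applicable items
    ```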

  • For comprehensive definitions, please refer to the AI Ethics Guidelines document’s definitions section. For an overview of how fairness, accountability, transparency and explainability in AI systems can be achieved, you might want to refer to the “Pointers to key resources for technical experts” document.