Data-driven, predictive technologies continue to gain relevance in global development. Artificial intelligence (AI) and machine learning can improve the targeting of limited resources, inform more effective interventions, and increase efficiency through automation. In global health, for example, machine-learning approaches are enabling more precise identification of, and support for, the patients most likely to miss critical follow-up visits, improving program performance. At the same time, global stakeholders are increasingly calling attention to AI ethics. In the last year, GSMA launched its AI Ethics Playbook, Google.org highlighted responsible AI in its $25 million AI for the Global Goals investment, and UNESCO adopted the world’s first global agreement on the Ethics of AI. This focus on ethics is well founded. Applications of AI, from credit scoring and medical diagnostics to facial recognition, have been fraught with biased outcomes, opaque processes, and claims of unjust surveillance. As the use of AI extends into low- and middle-income countries (LMICs), the potential for similar harm is high.
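To make the health example concrete, the sketch below shows the kind of predictive targeting described above: training a simple classifier to flag the patients most likely to miss a follow-up visit so that outreach can be prioritized. The features, data, and model are hypothetical illustrations, not any particular program's approach.

```python
# Minimal sketch of predictive targeting for follow-up visits. All feature
# names and data are synthetic illustrations, not a real program's model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Hypothetical features: distance to clinic (km), prior missed visits, age.
X = np.column_stack([
    rng.exponential(10.0, n),     # distance_km
    rng.poisson(0.5, n),          # prior_missed_visits
    rng.integers(18, 70, n),      # age
])
# Synthetic outcome: missing a visit grows more likely with distance and history.
p_miss = 1.0 / (1.0 + np.exp(-(0.05 * X[:, 0] + 0.8 * X[:, 1] - 2.0)))
y = rng.binomial(1, p_miss)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Rank patients by predicted risk; the top decile gets proactive outreach.
risk = model.predict_proba(X_test)[:, 1]
top_decile = np.argsort(risk)[::-1][: len(risk) // 10]
print(f"Flagged {len(top_decile)} of {len(risk)} patients for proactive outreach")
```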
One such risk in predictive technologies is the unintended introduction of bias. Algorithmic bias arises when systematic differences in the data used to train machine-learning models and AI-based tools carry through into their results. In digital health, this dynamic can manifest as ‘health data poverty’: the inability of individuals, groups, or regions to benefit from new technologies built with datasets in which they are underrepresented. For example, diagnostic models for skin cancer developed predominantly from images of light-skinned male patients fail to diagnose darker skin types with similar accuracy. Bias can also be introduced by decisions made during the design of AI models. In global development, this risk is compounded by the concentration of AI expertise in relatively few high-income countries, which means AI models are often designed by individuals with little contextual understanding of the environment in which the data is collected and where the application is likely to be used. This increases the risk of codifying bias that those more familiar with the local context could recognize and mitigate. Finally, once AI models are developed, failing to measure, report, and critically interpret their performance forfeits further opportunities to detect and reduce the negative impact of model bias.
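One practical safeguard against that last failure mode is to evaluate model performance disaggregated by subgroup rather than only in aggregate, so that gaps like the skin-tone disparity above become visible. The sketch below assumes a hypothetical binary classifier's predictions and subgroup labels; all figures are invented for illustration.

```python
# Sketch of disaggregated evaluation: report sensitivity per subgroup so that
# performance gaps hidden by aggregate metrics become visible.
# Predictions and subgroup labels are invented for illustration.
import numpy as np
from sklearn.metrics import recall_score, precision_score

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1])
y_pred = np.array([1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1])
# Hypothetical subgroup for each patient (e.g., a skin-tone category).
group = np.array(["light", "light", "light", "dark", "light", "light",
                  "dark", "dark", "light", "dark", "dark", "light"])

print(f"overall: sensitivity={recall_score(y_true, y_pred):.2f}")
for g in np.unique(group):
    mask = group == g
    sens = recall_score(y_true[mask], y_pred[mask])              # true-positive rate
    prec = precision_score(y_true[mask], y_pred[mask], zero_division=0)
    print(f"{g:>7}: sensitivity={sens:.2f} precision={prec:.2f} (n={mask.sum()})")
```

On this invented data the aggregate sensitivity looks acceptable (0.75) while one subgroup's sensitivity is far lower (0.33 versus 1.00), exactly the kind of gap an aggregate metric hides.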
Encouragingly, the global community is responding. Global donors are collaborating to increase the availability of representative datasets for LMICs. Academic institutions, governments, the private sector, and civil society organizations are developing and using principles, frameworks, toolkits, and guides that help both developers and users of predictive models proactively identify and mitigate risks. New challenges and funding opportunities further incentivize the development of approaches that prioritize responsible AI. As the global development community continues to apply AI in its work, the safeguards of representative datasets, inclusion of local stakeholders, and a collective commitment to responsible use can help accelerate equitable progress towards some of the most challenging global development goals.
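As a small illustration of the first safeguard, the representativeness of a dataset can be checked directly by comparing group shares in the training data against a population reference such as census figures. The groups, shares, and threshold below are invented for illustration.

```python
# Sketch of a dataset-representativeness check: compare each group's share of
# the training data against a population reference and flag groups whose share
# falls well below it. All figures and the 0.8 threshold are invented.
from collections import Counter

dataset_groups = ["urban"] * 800 + ["rural"] * 200       # hypothetical training data
population_share = {"urban": 0.45, "rural": 0.55}        # hypothetical census figures

counts = Counter(dataset_groups)
total = sum(counts.values())
for group, pop_share in population_share.items():
    data_share = counts[group] / total
    flag = "UNDERREPRESENTED" if data_share / pop_share < 0.8 else "ok"
    print(f"{group}: dataset {data_share:.0%} vs population {pop_share:.0%} -> {flag}")
```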
Additional reading:
Managing Machine Learning Projects in International Development: A Practical Guide