- Perspective
- Open access
- Published: 13 January 2020
The role of artificial intelligence in achieving the Sustainable Development Goals
- Ricardo Vinuesa ORCID: orcid.org/0000-0001-6570-5499 1 ,
- Hossein Azizpour ORCID: orcid.org/0000-0001-5211-6388 2 ,
- Iolanda Leite 2 ,
- Madeline Balaam 3 ,
- Virginia Dignum 4 ,
- Sami Domisch ORCID: orcid.org/0000-0002-8127-9335 5 ,
- Anna Felländer 6 ,
- Simone Daniela Langhans 7 , 8 ,
- Max Tegmark 9 &
- Francesco Fuso Nerini ORCID: orcid.org/0000-0002-4770-4051 10
Nature Communications, volume 11, Article number: 233 (2020)
- Computational science
- Developing world
- Energy efficiency
The emergence of artificial intelligence (AI) and its progressively wider impact on many sectors requires an assessment of its effect on the achievement of the Sustainable Development Goals. Using a consensus-based expert elicitation process, we find that AI can enable the accomplishment of 134 targets across all the goals, but it may also inhibit 59 targets. However, current research foci overlook important aspects. The fast development of AI needs to be supported by the necessary regulatory insight and oversight for AI-based technologies to enable sustainable development. Failure to do so could result in gaps in transparency, safety, and ethical standards.
Introduction
The emergence of artificial intelligence (AI) is shaping an increasing range of sectors. For instance, AI is expected to affect global productivity 1 , equality and inclusion 2 , environmental outcomes 3 , and several other areas, both in the short and long term 4 . Reports to date indicate both positive 5 and negative 6 potential impacts of AI on sustainable development. However, there is no published study systematically assessing the extent to which AI might impact all aspects of sustainable development, defined in this study as the 17 Sustainable Development Goals (SDGs) and 169 targets internationally agreed in the 2030 Agenda for Sustainable Development 7 . This is a critical research gap, as we find that AI may influence the ability to meet all SDGs.
Here we present and discuss implications of how AI can either enable or inhibit the delivery of all 17 goals and 169 targets recognized in the 2030 Agenda for Sustainable Development. Relationships were characterized by the methods reported at the end of this study, which can be summarized as a consensus-based expert elicitation process informed by previous studies aimed at mapping SDG interlinkages 8 , 9 , 10 . A summary of the results is given in Fig. 1 , and the Supplementary Data 1 provides a complete list of all the SDGs and targets, together with the detailed results from this work. Although there is no internationally agreed definition of AI, for this study we consider as AI any software technology with at least one of the following capabilities: perception, including audio, visual, textual, and tactile (e.g., face recognition); decision-making (e.g., medical diagnosis systems); prediction (e.g., weather forecasting); automatic knowledge extraction and pattern recognition from data (e.g., discovery of fake-news circles in social media); interactive communication (e.g., social robots or chatbots); and logical reasoning (e.g., theory development from premises). This view encompasses a large variety of subfields, including machine learning.
Fig. 1: Documented evidence of the potential of AI acting as (a) an enabler or (b) an inhibitor on each of the SDGs. The numbers inside the colored squares represent each of the SDGs (see the Supplementary Data 1 ). The percentages on top indicate the proportion of all targets potentially affected by AI, and the ones in the inner circle of the figure correspond to the proportions within each SDG. The results corresponding to the three main groups, namely Society, Economy, and Environment, are also shown in the outer circle of the figure. The results obtained when the type of evidence is taken into account are shown by the inner shaded area and the values in brackets.
Documented connections between AI and the SDGs
Our review of relevant evidence shows that AI may act as an enabler on 134 targets (79%) across all SDGs, generally through a technological improvement that may help overcome certain present limitations. However, 59 targets (35%, also across all SDGs) may experience a negative impact from the development of AI. For the purpose of this study, we divide the SDGs into three categories, according to the three pillars of sustainable development, namely Society, Economy, and Environment 11 , 12 (see the Methods section). This classification allows us to provide an overview of the general areas of influence of AI. In Fig. 1 , we also provide the results obtained when the percentage of targets assessed is weighted by how appropriate the evidence presented in each reference is for assessing an interlinkage, as discussed in the Methods section and below. A detailed assessment of the Society, Economy, and Environment groups, together with illustrative examples, is presented next.
AI and societal outcomes
Sixty-seven targets (82%) within the Society group could potentially benefit from AI-based technologies (Fig. 2 ). For instance, in SDG 1 on no poverty, SDG 4 on quality education, SDG 6 on clean water and sanitation, SDG 7 on affordable and clean energy, and SDG 11 on sustainable cities, AI may act as an enabler for all the targets by supporting the provision of food, health, water, and energy services to the population. It can also underpin low-carbon systems, for instance, by supporting the creation of circular economies and smart cities that use their resources efficiently 13 , 14 . For example, AI can enable smart and low-carbon cities encompassing a range of interconnected technologies, such as electrical autonomous vehicles and smart appliances, that enable demand response in the electricity sector 13 , 14 (with benefits across SDGs 7, 11, and 13 on climate action). AI can also help to integrate variable renewables by enabling smart grids that partially match electrical demand to times when the sun is shining and the wind is blowing 13 . Fewer targets in the Society group (31 targets, 38%) may be negatively impacted by AI than positively impacted, but their consideration is crucial. Many of these relate to how the technological improvements enabled by AI may be implemented in countries with different cultural values and wealth. Advanced AI technology, research, and product design may require massive computational resources that are only available through large computing centers. These facilities have a very high energy requirement and carbon footprint 15 . For instance, cryptocurrency applications such as Bitcoin globally use as much electricity as some nations' entire electrical demand 16 , compromising outcomes not only in the SDG 7 sphere but also on SDG 13 on climate action.
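The demand-response matching mentioned above can be sketched as a toy computation: a fixed share of demand is treated as flexible and moved from the hour with the largest renewable deficit to the hour with the largest surplus. The function name, the flexible-load fraction, and all hourly figures below are invented for illustration; real dispatch algorithms are far more sophisticated.

```python
# Toy demand-response sketch: shift a flexible share of electrical demand
# from the hour with the largest renewable deficit to the hour with the
# largest surplus. All numbers are illustrative, not real grid data.

def shift_flexible_load(demand, renewables, flexible_fraction=0.2):
    """Return a demand profile with part of the load moved toward
    hours of high renewable supply."""
    total_flexible = sum(demand) * flexible_fraction
    # Surplus (+) or deficit (-) of renewable supply in each hour.
    balance = [r - d for r, d in zip(renewables, demand)]
    deficit_hour = balance.index(min(balance))
    surplus_hour = balance.index(max(balance))
    shifted = list(demand)
    moved = min(total_flexible, shifted[deficit_hour])
    shifted[deficit_hour] -= moved
    shifted[surplus_hour] += moved
    return shifted

demand = [50, 60, 80, 70]      # MWh per hour (invented)
renewables = [70, 40, 30, 90]  # MWh per hour (invented)
print(shift_flexible_load(demand, renewables))  # [102.0, 60, 28.0, 70]
```

Note that total consumption is conserved; only its timing changes, which is exactly what allows a grid to absorb more variable renewable generation.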
Some estimates suggest that the total electricity demand of information and communications technologies (ICTs) could require up to 20% of the global electricity demand by 2030, up from around 1% today 15 . Green growth of ICT technology is therefore essential 17 . More efficient cooling systems for data centers, broader energy efficiency, and renewable-energy usage in ICTs will all play a role in containing the growth of electricity demand 15 . In addition to more efficient and renewable-energy-based data centers, it is essential to embed human knowledge in the development of AI models. Besides the fact that the human brain consumes much less energy than is used to train AI models, knowledge introduced into the model (see, for instance, physics-informed deep learning 18 ) does not need to be learnt through data-intensive training, which may significantly reduce the associated energy consumption. Although AI-enabled technology can act as a catalyst to achieve the 2030 Agenda, it may also trigger inequalities that act as inhibitors on SDGs 1, 4, and 5. This duality is reflected in target 1.1: AI can help to identify areas of poverty and foster international action using satellite images 5 , but it may also lead to additional qualification requirements for any job, consequently increasing the inherent inequalities 19 and acting as an inhibitor towards the achievement of this target.
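The physics-informed idea referenced above can be illustrated with a minimal composite loss for the toy equation u'(x) = -u(x): a data-fit term plus a penalty on the residual of the governing equation, so part of the solution structure comes from known physics rather than from data-intensive training. This sketch uses finite differences on fixed arrays; actual physics-informed deep learning differentiates a neural network via automatic differentiation, and the function name and weighting factor here are invented.

```python
import numpy as np

# Composite loss for a toy physics-informed setup: a data-fit term plus a
# penalty on the residual of the governing equation u'(x) = -u(x). Because
# the equation itself constrains the model, less must be learnt from data.

def physics_informed_loss(u_pred, x, u_data, lam=1.0):
    data_loss = np.mean((u_pred - u_data) ** 2)
    residual = np.gradient(u_pred, x) + u_pred  # u' + u should vanish
    physics_loss = np.mean(residual ** 2)
    return data_loss + lam * physics_loss

x = np.linspace(0.0, 1.0, 50)
u_true = np.exp(-x)  # exact solution of u' = -u with u(0) = 1

# A prediction that ignores the physics is penalized even where it is
# close to the data:
loss_exact = physics_informed_loss(u_true, x, u_true)
loss_flat = physics_informed_loss(np.ones_like(x), x, u_true)
print(loss_exact < loss_flat)  # True
```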
Fig. 2: Documented evidence of positive or negative impact of AI on the achievement of each of the targets from SDGs 1, 2, 3, 4, 5, 6, 7, 11, and 16 ( https://www.un.org/sustainabledevelopment/ ). Each block in the diagram represents a target (see the Supplementary Data 1 for additional details on the targets). For targets highlighted in green or orange, we found published evidence that AI could potentially enable or inhibit such a target, respectively. The absence of highlighting indicates the absence of identified evidence. It is noteworthy that this does not necessarily imply the absence of a relationship. (The content of this figure has not been reviewed by the United Nations and does not reflect its views.)
Another important drawback of AI-based developments is that they are traditionally based on the needs and values of the nations in which AI is being developed. If AI technology and big data are used in regions where ethical scrutiny, transparency, and democratic control are lacking, AI might enable nationalism, hate towards minorities, and bias in election outcomes 20 . The term “big nudging” has emerged to describe the use of big data and AI to exploit psychological weaknesses to steer decisions, creating problems such as damage to social cohesion, democratic principles, and even human rights 21 . AI has recently been utilized to develop citizen scores, which are used to control social behavior 22 . Such scores are a clear example of a threat to human rights arising from AI misuse, and one of their biggest problems is the lack of information given to citizens on the type of data analyzed and on the consequences this may have on their lives. It is also important to note that AI technology is unevenly distributed: for instance, complex AI-enhanced agricultural equipment may not be accessible to small farmers and may thus widen the gap with respect to larger producers in more developed economies 23 , consequently inhibiting the achievement of some targets of SDG 2 on zero hunger. There is another important shortcoming of AI in the context of SDG 5 on gender equality: there is insufficient research assessing the potential impact of technologies such as smart algorithms, image recognition, or reinforcement learning on discrimination against women and minorities. For instance, machine-learning algorithms uncritically trained on regular news articles will inadvertently learn and reproduce the societal biases against women and girls that are embedded in current languages. Word embeddings, a popular technique in natural language processing, have been found to exacerbate existing gender stereotypes 2 .
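The word-embedding effect can be made concrete with a toy bias probe: project word vectors onto a "he minus she" direction, so that the sign of the projection reveals which gendered pole a word has drifted towards. The three-dimensional vectors below are fabricated purely for illustration; real embeddings have hundreds of dimensions and are learnt from large corpora.

```python
import numpy as np

# Toy illustration of probing gender bias in word embeddings by projecting
# words onto a "he - she" direction. The vectors are invented; in practice
# the same probe is applied to embeddings trained on real text.

vectors = {
    "he":       np.array([ 1.0, 0.2, 0.1]),
    "she":      np.array([-1.0, 0.2, 0.1]),
    "engineer": np.array([ 0.6, 0.8, 0.0]),
    "nurse":    np.array([-0.7, 0.7, 0.1]),
}

def gender_projection(word):
    """Positive values lean toward 'he', negative toward 'she'."""
    direction = vectors["he"] - vectors["she"]
    direction = direction / np.linalg.norm(direction)
    return float(vectors[word] @ direction)

print(gender_projection("engineer") > 0)  # True
print(gender_projection("nurse") < 0)     # True
```

When occupation words show systematic non-zero projections like these in embeddings trained on news text, the stereotype in the training corpus has been absorbed, and is then reproduced by any downstream system using those embeddings.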
In addition to the lack of diversity in datasets, another main issue is the lack of gender, racial, and ethnic diversity in the AI workforce 24 . Diversity is one of the main principles supporting innovation and societal resilience, which will become essential in a society exposed to the changes associated with AI development 25 . Societal resilience is also promoted by decentralization, i.e., by the implementation of AI technologies adapted to the cultural background and the particular needs of different regions.
AI and economic outcomes
The technological advantages provided by AI may also have a positive impact on the achievement of a number of SDGs within the Economy group. We have identified benefits from AI on 42 targets (70%) from these SDGs, whereas negative impacts are reported for 20 targets (33%), as shown in Fig. 1 . Although Acemoglu and Restrepo 1 report a net positive impact of AI-enabled technologies associated with increased productivity, the literature also reflects potential negative impacts, mainly related to increased inequalities 26 , 27 , 28 , 29 . In the context of the Economy group of SDGs, if future markets rely heavily on data analysis and these resources are not equally available in low- and middle-income countries, the economic gap may widen significantly due to these newly introduced inequalities 30 , 31 , impacting SDGs 8 (decent work and economic growth), 9 (industry, innovation and infrastructure), and 10 (reduced inequalities). Brynjolfsson and McAfee 31 argue that AI can also exacerbate inequality within nations. By replacing old jobs with ones requiring more skills, technology disproportionately rewards the educated: since the mid 1970s, salaries in the United States (US) have risen about 25% for those with graduate degrees, while the average high-school dropout has taken a 30% pay cut. Moreover, automation shifts corporate income from those who work at companies to those who own them. Such a transfer of revenue from workers to investors helps explain why, even though the combined revenues of Detroit's “Big 3” (GM, Ford, and Chrysler) in 1990 were almost identical to those of Silicon Valley's “Big 3” (Google, Apple, and Facebook) in 2014, the latter had 9 times fewer employees and were worth 30 times more on the stock market 32 . Figure 3 shows an assessment of the documented positive and negative effects on the various targets within the SDGs in the Economy group.
Fig. 3: Documented evidence of positive or negative impact of AI on the achievement of each of the targets from SDGs 8, 9, 10, 12, and 17 ( https://www.un.org/sustainabledevelopment/ ). The interpretation of the blocks and colors is as in Fig. 2 . (The content of this figure has not been reviewed by the United Nations and does not reflect its views.)
Although the identified linkages in the Economy group are mainly positive, trade-offs cannot be neglected. For instance, AI can have a negative effect on social media usage by showing users content specifically suited to their preconceived ideas. This may lead to political polarization 33 and affect social cohesion 21 , with consequences in the context of SDG 10 on reduced inequalities. On the other hand, AI can help identify sources of inequality and conflict 34 , 35 , and thereby potentially reduce inequalities, for instance, by using simulations to assess how virtual societies may respond to changes. However, there is an underlying risk when using AI to evaluate and predict human behavior: the inherent bias in the data. A number of discriminatory challenges have been reported in the automated targeting of online job advertising using AI 35 , essentially related to pre-existing biases in selection processes conducted by human recruiters. The work by Dalenberg 35 highlights the need to modify the data-preparation process and to explicitly adapt the AI-based algorithms used for selection processes to avoid such biases.
AI and environmental outcomes
The last group of SDGs, i.e., that related to the Environment, is analyzed in Fig. 4 . The three SDGs in this group concern climate action, life below water, and life on land (SDGs 13, 14, and 15). For the Environment group, we identified 25 targets (93%) for which AI could act as an enabler. Benefits from AI could derive from the possibility of analyzing large-scale interconnected databases to develop joint actions aimed at preserving the environment. Looking at SDG 13 on climate action, there is evidence that AI advances will support the understanding of climate change and the modeling of its possible impacts. Furthermore, AI will support low-carbon energy systems with high integration of renewable energy and energy efficiency, which are all needed to address climate change 13 , 36 , 37 . AI can also be used to help improve the health of ecosystems. The achievement of target 14.1, which calls for preventing and significantly reducing marine pollution of all kinds, can benefit from AI through algorithms for the automatic identification of possible oil spills 38 . Another example is target 15.3, which calls for combating desertification and restoring degraded land and soil. According to Mohamadi et al. 39 , neural networks and object-oriented techniques can be used to improve the classification of vegetation cover types based on satellite images, with the possibility of processing large amounts of images in a relatively short time. These AI techniques can help to identify desertification trends over large areas, information that is relevant for environmental planning, decision-making, and management, either to avoid further desertification or to help reverse trends by identifying the major drivers. However, as pointed out above, efforts to achieve SDG 13 on climate action could be undermined by the high energy needs of AI applications, especially if non-carbon-neutral energy sources are used.
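One way a vegetation-cover classifier of the kind cited above can work, stripped to its simplest form, is a nearest-centroid rule over spectral bands of each satellite pixel. The class names, centroid positions, and reflectance values below are invented for illustration; operational systems use neural networks over many bands and millions of pixels.

```python
# Toy nearest-centroid classification of satellite pixels into vegetation
# cover types from two spectral bands. Centroids and reflectances are
# invented; real classifiers are trained on labeled imagery.

CENTROIDS = {
    "dense_vegetation":  (0.05, 0.45),  # (red, near-infrared) reflectance
    "sparse_vegetation": (0.15, 0.25),
    "bare_soil":         (0.30, 0.35),
}

def classify_pixel(red, nir):
    """Assign the pixel to the class with the nearest centroid."""
    def dist2(label):
        cr, cn = CENTROIDS[label]
        return (red - cr) ** 2 + (nir - cn) ** 2
    return min(CENTROIDS, key=dist2)

print(classify_pixel(0.06, 0.43))  # dense_vegetation
print(classify_pixel(0.28, 0.33))  # bare_soil
```

Mapping every pixel of successive satellite scenes with such a rule, and comparing class maps over time, is the basic mechanism by which desertification trends over large areas can be tracked.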
Furthermore, despite the many examples of how AI is increasingly applied to improve biodiversity monitoring and conservation 40 , it can be conjectured that increased access to AI-derived information about ecosystems may drive the over-exploitation of resources, although such misuse has so far not been sufficiently documented. This aspect is further discussed below, where currently identified gaps in AI research are considered.
Fig. 4: Documented evidence of positive or negative impact of AI on the achievement of each of the targets from SDGs 13, 14, and 15 ( https://www.un.org/sustainabledevelopment/ ). The interpretation of the blocks and colors is as in Fig. 2 . (The content of this figure has not been reviewed by the United Nations and does not reflect its views.)
An assessment of the collected evidence on the interlinkages
A deeper analysis of the gathered evidence was undertaken, as shown in Fig. 1 (and explained in the Methods section). In practice, each interlinkage was weighted based on the applicability and appropriateness of each of the references for assessing a specific interlinkage, which also makes it possible to identify research gaps. Although accounting for the type of evidence has a relatively small effect on the positive impacts (we see a reduction of positively affected targets from 79% to 71%), we observe a more significant reduction (from 35% to 23%) in the targets with negative impacts of AI. This can be partly due to the fact that AI research typically involves quantitative methods that would bias the results towards the positive effects. However, there are some differences across the Society, Economy, and Environment spheres. In the Society sphere, when weighting the appropriateness of evidence, positively affected targets diminish by 5 percentage points (p.p.) and negatively affected targets by 13 p.p. In particular, weighting the appropriateness of evidence on negative impacts on SDG 1 (on no poverty) and SDG 6 (on clean water and sanitation) reduces the fraction of affected targets by 43 p.p. and 35 p.p., respectively. In the Economy group, instead, positive impacts are reduced more (15 p.p.) than negative ones (10 p.p.) when taking into account the appropriateness of the available evidence for addressing these issues. This can be related to the extensive literature assessing the displacement of jobs by AI (because of clear policy and societal concerns), whereas the longer-term benefits of AI for the economy are not so extensively characterized by currently available methods. Finally, although the weighting of evidence decreases the positive impacts of AI on the Environment group by only 8 p.p., the negative impacts see the largest average reduction (18 p.p.).
This is explained by the fact that, although there are some indications of the potential negative impact of AI on these SDGs, there is no strong evidence (for any of the targets) supporting this claim; this is therefore a relevant area for future research.
In general, the fact that the evidence on interlinkages between AI and the large majority of targets is not based on analyses and tools tailored to each particular issue provides a strong rationale for addressing a number of research gaps, which are identified and listed in the section below.
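The mechanics of the evidence weighting described above can be illustrated with a toy computation: an unweighted fraction counts any documented evidence equally, while a weighted fraction discounts weaker evidence. The ten scores below are invented; the actual analysis covers 169 targets with multiple references per interlinkage.

```python
# Toy version of the evidence weighting: each target with documented
# evidence gets an appropriateness score in (0, 1]; a zero means no
# evidence was found. Weighting discounts weaker evidence, lowering the
# fraction of affected targets. Scores below are invented.

def affected_fraction(scores, use_weights=False):
    if use_weights:
        return sum(scores) / len(scores)
    return sum(1 for s in scores if s > 0) / len(scores)

scores = [1.0, 0.5, 0.25, 0.0, 1.0, 0.25, 0.0, 0.5, 0.0, 0.5]
print(affected_fraction(scores))                    # 0.7
print(affected_fraction(scores, use_weights=True))  # 0.4
```

The drop from 0.7 to 0.4 in this toy example mirrors, in exaggerated form, the reported reductions (e.g., from 79% to 71% for enabled targets) once the appropriateness of the evidence is taken into account.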
Research gaps on the role of AI in sustainable development
The more we enable SDGs by deploying AI applications, from autonomous vehicles 41 to AI-powered healthcare solutions 42 and smart electrical grids 13 , the more important it becomes to invest in the AI safety research needed to keep these systems robust and beneficial, so as to prevent them from malfunctioning or from being hacked 43 . A crucial research avenue for the safe integration of AI is understanding the catastrophes that could be enabled by a systemic fault in AI technology. For instance, a recent World Economic Forum (WEF) report raises such a concern regarding the integration of AI in the financial sector 44 . It is therefore very important to raise awareness of the risks associated with possible failures of AI systems in a society progressively more dependent on this technology. Furthermore, although we were able to find numerous studies suggesting that AI can potentially serve as an enabler for many SDG targets and indicators, a significant fraction of these studies were conducted in controlled laboratory environments, based on limited datasets or using prototypes 45 , 46 , 47 . Hence, extrapolating this information to evaluate real-world effects often remains a challenge. This is particularly true when measuring the impact of AI across broader scales, both temporally and spatially. We acknowledge that conducting controlled experimental trials for evaluating the real-world impacts of AI can only depict a snapshot situation, in which AI tools are tailored to that specific environment. However, as society is constantly changing (also due to factors including non-AI-based technological advances), the requirements set for AI change as well, resulting in a feedback loop of interactions between society and AI. Another underemphasized aspect in the existing literature is the resilience of society towards AI-enabled changes.
Therefore, novel methodologies are required to ensure that the impact of new technologies is assessed from the points of view of efficiency, ethics, and sustainability, prior to launching large-scale AI deployments. In this sense, research aimed at obtaining insight into the reasons for the failure of AI systems, for instance by introducing combined human–machine analysis tools 48 , is an essential step towards accountable AI technology, given the large risk associated with such failures.
Although we found more published evidence of AI serving as an enabler than as an inhibitor on the SDGs, there are at least two important aspects that should be considered. First, self-interest can be expected to bias the AI research community and industry towards publishing positive results. Second, discovering detrimental aspects of AI may require longer-term studies and, as mentioned above, few established evaluation methodologies are available to do so. The bias towards publishing positive results is particularly apparent in the SDGs corresponding to the Environment group. A good example is target 14.5 on conserving coastal and marine areas, where machine-learning algorithms can provide optimum solutions, given a wide range of parameters, regarding the best choice of areas to include in conservation networks 49 . However, even if the solutions are optimal from a mathematical point of view (given a certain range of selected parameters), additional research would be needed to assess the long-term impact of such algorithms on equity and fairness 6 , precisely because of the unknown factors that may come into play. Regarding the second point, it is likely that the AI projects with the highest potential for maximizing profit will get funded. Without control, research on AI is expected to be directed towards the applications where funding and commercial interests lie. This may result in increased inequality 50 . Consequently, there is a risk that AI-based technologies with the potential to achieve certain SDGs may not be prioritized if their expected economic impact is not high. Furthermore, it is essential to promote the development of initiatives to assess the societal, ethical, legal, and environmental implications of new AI technologies.
Substantive research on the application of AI technologies to the SDGs is concerned with the development of better data-mining and machine-learning techniques for the prediction of certain events. This is the case for applications such as forecasting extreme weather conditions or predicting recidivist offender behavior. The expectation of this research is to allow preparation and response for a wide range of events. However, there is a research gap in real-world applications of such systems, e.g., by governments (as discussed above). Institutions face a number of barriers to the adoption of AI systems as part of their decision-making processes, including the need to set up measures for cybersecurity and the need to protect the privacy of citizens and their data. Both aspects have implications for human rights regarding the issues of surveillance, tracking, communication, and data storage, as well as the automation of processes without rigorous ethical standards 21 . Targeting these gaps would be essential to ensure the usability and practicality of AI technologies for governments. This would also be a prerequisite for understanding the long-term impacts of AI regarding its potential, while regulating its use to reduce the possible bias that can be inherent to AI 6 .
Furthermore, our research suggests that AI applications are currently biased towards SDG issues that are mainly relevant to those nations where most AI researchers live and work. For instance, many systems applying AI technologies to agriculture, e.g., to automate harvesting or optimize its timing, are located within wealthy nations. Our literature search returned only a handful of examples where AI technologies are applied to SDG-related issues in nations without strong AI research. Moreover, if AI technologies are designed and developed for technologically advanced environments, they have the potential to exacerbate problems in less wealthy nations (e.g., when it comes to food production). This finding leads to a substantial concern that developments in AI technologies could increase inequalities both between and within countries, in ways that counteract the overall purpose of the SDGs. We encourage researchers and funders to focus more on designing and developing AI solutions that respond to localized problems in less wealthy nations and regions. Projects undertaking such work should ensure that solutions are not simply transferred from technology-intensive nations. Instead, they should be developed based on a deep understanding of the respective region or culture, to increase the likelihood of adoption and success.
Towards sustainable AI
The great wealth that AI-powered technology has the potential to create may go mainly to those already well-off and educated, while job displacement leaves others worse off. Globally, the growing economic importance of AI may result in increased inequalities due to the unevenly distributed educational and computing resources throughout the world. Furthermore, the existing biases in the data used to train AI algorithms may be exacerbated, eventually leading to increased discrimination. A related problem is the use of AI to produce computational (commercial, political) propaganda based on big data (also defined as “big nudging”), which is spread through social media by independent AI agents with the goal of manipulating public opinion and producing political polarization 51 . Although current scientific evidence refutes the technological determinism of such fake news 51 , long-term impacts of AI remain possible (although unstudied) due to a lack of robust research methods. A change of paradigm is therefore needed to promote cooperation and to limit the possibilities for the control of citizen behavior through AI. The concept of Finance 4.0 has been proposed 52 as a multi-currency financial system promoting a circular economy that is aligned with societal goals and values. Informational self-determination (in which the individual takes an active role in how their data are handled by AI systems) would be an essential aspect of such a paradigm 52 . The data intensiveness of AI applications creates another problem: the need for ever more detailed information to improve AI algorithms, which conflicts with the need for more transparent handling and protection of personal data 53 . One area where this conflict is particularly important is healthcare: Panch et al. 54 argue that, although the vast amount of personal healthcare data could lead to the development of very powerful tools for diagnosis and treatment, the numerous problems associated with data ownership and privacy call for careful policy intervention. This is also an area where more research is needed to assess the possible long-term negative consequences. All the challenges mentioned above culminate in the academic discourse about the legal personality of robots 55 , which may lead to alarming narratives of technological totalitarianism.
Many of these aspects result from the interplay between technological developments on one side and requests from individuals, response from governments, as well as environmental resources and dynamics on the other. Figure 5 shows a schematic representation of these dynamics, with emphasis on the role of technology. Based on the evidence discussed above, these interactions are not currently balanced and the advent of AI has exacerbated the process. A wide range of new technologies are being developed very fast, significantly affecting the way individuals live as well as the impacts on the environment, requiring new piloting procedures from governments. The problem is that neither individuals nor governments seem to be able to follow the pace of these technological developments. This fact is illustrated by the lack of appropriate legislation to ensure the long-term viability of these new technologies. We argue that it is essential to reverse this trend. A first step in this direction is to establish adequate policy and legislation frameworks, to help direct the vast potential of AI towards the highest benefit for individuals and the environment, as well as towards the achievement of the SDGs. Regulatory oversight should be preceded by regulatory insight, where policymakers have sufficient understanding of AI challenges to be able to formulate sound policy. Developing such insight is even more urgent than oversight, as policy formulated without understanding is likely to be ineffective at best and counterproductive at worst.
Fig. 5: Schematic representation showing the identified agents and their roles towards the development of AI. Thicker arrows indicate faster change. In this representation, technology affects individuals through technical developments, which change the way people work and interact with each other and with the environment, whereas individuals interact with technology through new needs to be satisfied. Technology (including technology itself and its developers) affects governments through new developments that need appropriate piloting and testing. Also, technology developers affect governments through lobbying and influencing decision makers. Governments provide legislation and standards to technology. Governments affect individuals through policy and legislation, and individuals require from governments new legislation consistent with the changing circumstances. The environment interacts with technology by providing the resources needed for technological development and is affected by the environmental impact of technology. Furthermore, the environment is affected, either negatively or positively, by the needs, impacts, and choices of individuals and governments, which in turn require environmental resources. Finally, the environment is also an underlying layer that provides the “planetary boundaries” to the mentioned interactions.
Although strong and connected institutions (covered by SDG 16) are needed to regulate the future of AI, we find that there is limited understanding of the potential impact of AI on institutions. Examples of the positive impacts include AI algorithms aimed at improving fraud detection 56 , 57 or assessing the possible effects of certain legislation 58 , 59 . A concern, however, is that data-driven approaches for policing may hinder equal access to justice because of algorithmic bias, particularly towards minorities 60 . Consequently, we believe that it is imperative to develop legislation regarding the transparency and accountability of AI, as well as to decide the ethical standards to which AI-based technology should be subject. This debate is being pushed forward by initiatives such as the IEEE (Institute of Electrical and Electronics Engineers) Ethically Aligned Design 60 and the new EU (European Union) ethical guidelines for trustworthy AI 61 . It is noteworthy that, despite the importance of an ethical, responsible, and trustworthy approach to AI development and use, this issue is, in a sense, independent of the aims of the article. In other words, one can envision AI applications that improve SDG outcomes while not being fully aligned with AI ethics guidelines. We therefore recommend that AI applications that target SDGs be open and explicit about their guiding ethical principles, including by indicating explicitly how they align with existing guidelines. On the other hand, the lack of interpretability of AI, which is currently one of the challenges of AI research, adds a further complication to the enforcement of such regulatory actions 62 . Note that this implies that AI algorithms (which are trained with data consisting of previous regulations and decisions) may act as a “mirror” reflecting biases and unfair policy. This presents an opportunity to identify and correct certain errors in existing procedures.
The friction between the uptake of data-driven AI applications and the need to protect the privacy and security of individuals is stark. When not properly regulated, the vast amount of data produced by citizens might potentially be used to influence consumer opinion towards a certain product or political cause 51 .
AI applications that have positive societal welfare implications may not always benefit each individual separately 41 . This inherent dilemma of collective vs. individual benefit is relevant in the scope of AI applications, but it is not one that should be solved by the application of AI itself. This issue has always affected humankind and cannot be solved in a simple way, since any solution requires the participation of all involved stakeholders. The dynamic nature of contexts and the level of abstraction at which human values are described imply that no single ethical theory holds all the time in all situations 63 . Consequently, embedding a single set of utilitarian ethical principles in AI would not be advisable, given the high complexity of our societies 52 . It is also essential to be aware of the potential complexity in the interaction between human and AI agents, and of the increasing need for ethics-driven legislation and certification mechanisms for AI systems. This is true for all AI applications, but especially for those that could have catastrophic effects on humanity if left uncontrolled, such as autonomous weapons. Regarding the latter, associations of AI and robotics experts are already coming together to call for legislation and limitations on their use 64 . Furthermore, associations such as the Future of Life Institute are reviewing and collecting policy actions and shared principles around the world to monitor progress towards sustainable-development-friendly AI 65 . To deal with the ethical dilemmas raised above, it is important that all applications provide openness about the choices and decisions made during design, development, and use, including information about the provenance and governance of the data used for training algorithms, and about whether and how they align with existing AI guidelines. In this context, adopting decentralized AI approaches may support a more equitable development of AI 66 .
We are at a critical turning point for the future of AI. A global and science-driven debate to develop shared principles and legislation among nations and cultures is necessary to shape a future in which AI positively contributes to the achievement of all the SDGs. The choices made now to develop a sustainable-development-friendly AI by 2030 have the potential to unlock benefits that could go far beyond the SDGs within our century. All actors in all nations should be represented in this dialogue, to ensure that no one is left behind. On the other hand, postponing or not having such a conversation could result in an unequal and unsustainable AI-fueled future.
In this section we describe the process employed to obtain the results described in the present study and shown in the Supplementary Data 1 . The goal was to answer the question “Is there published evidence of AI acting as an enabler or an inhibitor for this particular target?” for each of the 169 targets within the 17 SDGs. To this end, we conducted a consensus-based expert elicitation process, informed by previous studies on mapping SDGs interlinkages 8 , 9 and following Butler et al. 67 and Morgan 68 . The authors of this study are academics spanning a wide range of disciplines, including engineering, natural and social sciences, and acted as experts for the elicitation process. The authors performed an expert-driven literature search to support the identified connections between AI and the various targets, where the following sources of information were considered as acceptable evidence: published work on real-world applications (given the quality variation depending on the venue, we ensured that the publications considered in the analysis were of sufficient quality); published evidence on controlled/laboratory scenarios (given the quality variation depending on the venue, we ensured that the publications considered in the analysis were of sufficient quality); reports from accredited organizations (for instance: UN or government bodies); and documented commercial-stage applications. On the other hand, the following sources of information were not considered as acceptable evidence: educated conjectures, real-world applications without peer-reviewed research; media, public beliefs or other sources of information.
The expert elicitation process was conducted as follows: each of the SDGs was assigned to one or more main contributors, and in some cases to several additional contributors as summarized in the Supplementary Data 1 (here the initials correspond to the author names). The main contributors carried out a first literature search for that SDG and then the additional contributors completed the main analysis. One published study on a synergy or a trade-off between a target and AI was considered enough for mapping the interlinkage. However, for nearly all targets several references are provided. After the analysis of a certain SDG was concluded by the contributors, a reviewer was assigned to evaluate the connections and reasoning presented by the contributors. The reviewer was not part of the first analysis and we tried to assign the roles of the main contributor and reviewer to experts with complementary competences for each of the SDGs. The role of the reviewer was to bring up additional points of view and considerations, while critically assessing the analysis. Then, the main contributors and reviewers iteratively discussed to improve the results presented for each of the SDGs until the analysis for all the SDGs was sufficiently refined.
After reaching consensus regarding the assessment shown in the Supplementary Data 1 , we analyzed the results by evaluating the number of targets for which AI may act as an enabler or an inhibitor, and calculated the percentage of targets with positive and negative impact of AI for each of the 17 goals, as shown in Fig. 1 . In addition, we divided the SDGs into the three following categories: Society, Economy, and Environment, consistent with the classification discussed by Refs. 11 , 12 . The SDGs assigned to each of the categories are shown in Fig. 6 and the individual results from each of these groups can be observed in Figs. 2 – 4 . These figures indicate, for each target within each SDG, whether any published evidence of positive or negative impact was found.
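The per-goal tallying described above can be sketched in a few lines of code. This is a hypothetical illustration (not the authors' code): the function name `impact_percentages` and the input structure are assumptions made here for clarity.

```python
# Hypothetical sketch of the per-goal tallying described above: given a
# mapping from each SDG to the assessments of its targets, compute the
# percentage of targets with documented positive and negative AI impact.

def impact_percentages(assessments):
    """assessments: dict mapping SDG id -> list of (enabled, inhibited)
    booleans, one pair per target within that goal."""
    result = {}
    for sdg, targets in assessments.items():
        n = len(targets)
        pos = 100.0 * sum(1 for enabled, _ in targets if enabled) / n
        neg = 100.0 * sum(1 for _, inhibited in targets if inhibited) / n
        result[sdg] = (pos, neg)
    return result

# Toy example: a goal with four targets, three with evidence of AI
# acting as an enabler and one with evidence of AI as an inhibitor.
print(impact_percentages({"SDG 7": [(True, False), (True, False),
                                    (True, True), (False, False)]}))
# {'SDG 7': (75.0, 25.0)}
```

Note that a target can count towards both percentages when evidence of positive and negative impact was found, which is why the two percentages for a goal need not sum to 100.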
(The content of this figure has not been reviewed by the United Nations and does not reflect its views).
Taking into account the types of evidence
In the methodology described above, a connection between AI and a certain target is established if at least one reference documenting such a link was found. As the analyzed studies rely on very different types of evidence, it is important to classify the references based on the methods employed to support their conclusions. Therefore, all the references in the Supplementary Data 1 include a classification from (A) to (D) according to the following criteria:
References that use sophisticated tools and data to address the particular issue, and whose conclusions can be generalized, are of type (A).
Studies based on data addressing the particular issue, but with limited generalizability, are of type (B).
Anecdotal qualitative studies and methods are of type (C).
Purely theoretical or speculative references are of type (D).
The various classes were assigned following the same expert elicitation process described above. The contribution of these references towards the linkages is then weighted: categories (A), (B), (C), and (D) are assigned relative weights of 1, 0.75, 0.5, and 0.25, respectively. It is noteworthy that, given the vast range of studies on all the SDG areas, the literature search was not exhaustive and, therefore, certain targets are related to more references than others in our study. To avoid any bias associated with the different numbers of references for the various targets, we considered the largest positive and negative weight to establish the connection with each target. Let us consider the following example: for a certain target, one reference of type (B) documents a positive connection, and two references of types (A) and (D) document a negative connection with AI. In this case, the potential positive impact of AI on that target is assessed as 0.75, while the potential negative impact is 1.
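The evidence-weighting scheme above amounts to a simple maximum over per-reference weights. The following is a minimal sketch of that rule (not the authors' code; the function and variable names are assumptions made here for illustration):

```python
# Sketch of the evidence-weighting rule: each reference is classified
# (A)-(D), mapped to a relative weight, and a target's positive and
# negative connection strengths are the MAXIMUM weight among the
# references documenting each direction (not the sum), so that targets
# with many references are not favored over targets with few.

WEIGHTS = {"A": 1.0, "B": 0.75, "C": 0.5, "D": 0.25}

def connection_strength(references):
    """references: list of (evidence_type, direction) tuples, where
    evidence_type is 'A'..'D' and direction is 'positive' or 'negative'."""
    pos = [WEIGHTS[t] for t, d in references if d == "positive"]
    neg = [WEIGHTS[t] for t, d in references if d == "negative"]
    return (max(pos, default=0.0), max(neg, default=0.0))

# The example from the text: one positive reference of type (B), and
# negative references of types (A) and (D).
refs = [("B", "positive"), ("A", "negative"), ("D", "negative")]
print(connection_strength(refs))  # (0.75, 1.0)
```

Taking the maximum rather than the sum reproduces the example in the text: the positive impact is 0.75 and the negative impact is 1, regardless of how many weaker references accompany the strongest one.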
Limitations of the research
The presented analysis represents the perspective of the authors. Some literature on how AI might affect certain SDGs could have been missed by the authors or there might not be published evidence yet on such interlinkage. Nevertheless, the employed methods tried to minimize the subjectivity of the assessment. How AI might affect the delivery of each SDG was assessed and reviewed by several authors and a number of studies were reviewed for each interlinkage. Furthermore, as discussed in the Methods section, each interlinkage was discussed among a subset of authors until consensus was reached on its nature.
Finally, this study relies on the analysis of the SDGs. The SDGs provide a powerful lens for looking at internationally agreed goals on sustainable development and represent a leap forward compared with the Millennium Development Goals in the representation of all spheres of sustainable development, encompassing human rights 69 , social sustainability, environmental outcomes, and economic development. However, the SDGs are a political compromise and might be limited in the representation of some of the complex dynamics and cross-interactions among targets. Therefore, the SDGs have to be considered in conjunction with other previous and current international agreements 9 . For instance, as pointed out in a recent work by UN Human Rights 69 , human rights considerations are deeply embedded in the SDGs. Nevertheless, the SDGs should be considered as a complement to, rather than a replacement of, the United Nations Universal Declaration of Human Rights 70 .
Data availability
The authors declare that all the data supporting the findings of this study are available within the paper and its Supplementary Data 1 file .
Acemoglu, D. & Restrepo, P. Artificial Intelligence, Automation, and Work. NBER Working Paper No. 24196 (National Bureau of Economic Research, 2018).
Bolukbasi, T., Chang, K.-W., Zou, J., Saligrama, V. & Kalai, A. Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. Adv. Neural Inf. Process. Syst. 29 , 4349–4357 (2016).
Norouzzadeh, M. S. et al. Automatically identifying, counting, and describing wild animals in camera-trap images with deep learning. Proc. Natl Acad. Sci. USA 115 , E5716–E5725 (2018).
Tegmark, M. Life 3.0: Being Human in the Age of Artificial Intelligence (Random House Audio Publishing Group, 2017).
Jean, N. et al. Combining satellite imagery and machine learning to predict poverty. Science 353 , 790–794 (2016).
Courtland, R. Bias detectives: the researchers striving to make algorithms fair. Nature 558 , 357–360 (2018).
UN General Assembly (UNGA). A/RES/70/1Transforming our world: the 2030 Agenda for Sustainable Development. Resolut 25 , 1–35 (2015).
Fuso Nerini, F. et al. Mapping synergies and trade-offs between energy and the Sustainable Development Goals. Nat. Energy 3 , 10–15 https://doi.org/10.1038/s41560-017-0036-5 (2017).
Fuso Nerini, F. et al. Connecting climate action with other Sustainable Development Goals. Nat. Sustain . 1 , 674–680 (2019). https://doi.org/10.1038/s41893-019-0334-y
Fuso Nerini, F. et al. Use SDGs to guide climate action. Nature 557 , https://doi.org/10.1038/d41586-018-05007-1 (2018).
United Nations Economic and Social Council. Sustainable Development (United Nations Economic and Social Council, 2019).
Stockholm Resilience Centre’s (SRC) contribution to the 2016 Swedish 2030 Agenda HLPF report (Stockholm University, 2017).
International Energy Agency. Digitalization & Energy (International Energy Agency, 2017).
Fuso Nerini, F. et al. A research and innovation agenda for zero-emission European cities. Sustainability 11 , 1692 https://doi.org/10.3390/su11061692 (2019).
Jones, N. How to stop data centres from gobbling up the world’s electricity. Nature 561 , 163–166 (2018).
Truby, J. Decarbonizing Bitcoin: law and policy choices for reducing the energy consumption of Blockchain technologies and digital currencies. Energy Res. Soc. Sci. 44 , 399–410 (2018).
Karnama, A., Bitaraf Haghighi, E. & Vinuesa, R. Organic data centers: a sustainable solution for computing facilities. Results Eng. 4 , 100063 (2019).
Raissi, M., Perdikaris, P. & Karniadakis, G. E. Physics informed deep learning (part I): data-driven solutions of nonlinear partial differential equations. arXiv:1711.10561 (2017).
Nagano, A. Economic growth and automation risks in developing countries due to the transition toward digital modernity. Proc. 11th International Conference on Theory and Practice of Electronic Governance—ICEGOV ’18 (2018). https://doi.org/10.1145/3209415.3209442
Helbing, D. & Pournaras, E. Society: build digital democracy. Nature 527 , 33–34 (2015).
Helbing, D. et al. in Towards Digital Enlightenment 73–98 (Springer International Publishing, 2019). https://doi.org/10.1007/978-3-319-90869-4_7
Nagler, J., van den Hoven, J. & Helbing, D. in Towards Digital Enlightenment 41–46 (Springer International Publishing, 2019). https://doi.org/10.1007/978-3-319-90869-4_5
Wegren, S. K. The “left behind”: smallholders in contemporary Russian agriculture. J. Agrar. Chang. 18 , 913–925 (2018).
NSF - National Science Foundation. Women and Minorities in the S&E Workforce (NSF - National Science Foundation, 2018).
Helbing, D. The automation of society is next how to survive the digital revolution; version 1.0 (Createspace, 2015).
Cockburn, I., Henderson, R. & Stern, S. The Impact of Artificial Intelligence on Innovation (NBER, 2018). https://doi.org/10.3386/w24449
Seo, Y., Kim, S., Kisi, O. & Singh, V. P. Daily water level forecasting using wavelet decomposition and artificial intelligence techniques. J. Hydrol. 520 , 224–243 (2015).
Adeli, H. & Jiang, X. Intelligent Infrastructure: Neural Networks, Wavelets, and Chaos Theory for Intelligent Transportation Systems and Smart Structures (CRC Press, 2008).
Nunes, I. & Jannach, D. A systematic review and taxonomy of explanations in decision support and recommender systems. User Model. User-Adapt. Interact. 27 , 393–444 (2017).
Bissio, R. Vector of hope, source of fear. Spotlight Sustain. Dev . 77–86 (2018).
Brynjolfsson, E. & McAfee, A. The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies (W. W. Norton & Company, 2014).
Dobbs, R. et al. Poorer Than Their Parents? Flat or Falling Incomes in Advanced Economies (McKinsey Global Institute, 2016).
Francescato, D. Globalization, artificial intelligence, social networks and political polarization: new challenges for community psychologists. Commun. Psychol. Glob. Perspect. 4 , 20–41 (2018).
Saam, N. J. & Harrer, A. Simulating norms, social inequality, and functional change in artificial societies. J. Artif. Soc. Soc. Simul. 2 (1999).
Dalenberg, D. J. Preventing discrimination in the automated targeting of job advertisements. Comput. Law Secur. Rev. 34 , 615–627 (2018).
World Economic Forum (WEF). Fourth Industrial Revolution for the Earth Series Harnessing Artificial Intelligence for the Earth (World Economic Forum, 2018).
Vinuesa, R., Fdez. De Arévalo, L., Luna, M. & Cachafeiro, H. Simulations and experiments of heat loss from a parabolic trough absorber tube over a range of pressures and gas compositions in the vacuum chamber. J. Renew. Sustain. Energy 8 (2016).
Keramitsoglou, I., Cartalis, C. & Kiranoudis, C. T. Automatic identification of oil spills on satellite images. Environ. Model. Softw. 21 , 640–652 (2006).
Mohamadi, A., Heidarizadi, Z. & Nourollahi, H. Assessing the desertification trend using neural network classification and object-oriented techniques. J. Fac. Istanb. Univ. 66 , 683–690 (2016).
Kwok, R. AI empowers conservation biology. Nature 567 , 133–134 (2019).
Bonnefon, J.-F., Shariff, A. & Rahwan, I. The social dilemma of autonomous vehicles. Science 352 , 1573–1576 (2016).
De Fauw, J. et al. Clinically applicable deep learning for diagnosis and referral in retinal disease. Nat. Med 24 , 1342–1350 (2018).
Russell, S., Dewey, D. & Tegmark, M. Research priorities for robust and beneficial artificial intelligence. AI Mag. 34 , 105–114 (2015).
World Economic Forum (WEF). The New Physics of Financial Services – How Artificial Intelligence is Transforming the Financial Ecosystem (World Economic Forum, 2018).
Gandhi, N., Armstrong, L. J. & Nandawadekar, M. Application of data mining techniques for predicting rice crop yield in semi-arid climatic zone of India. 2017 IEEE Technological Innovations in ICT for Agriculture and Rural Development (TIAR) (2017). https://doi.org/10.1109/tiar.2017.8273697
Esteva, A. et al. Corrigendum: dermatologist-level classification of skin cancer with deep neural networks. Nature 546 , 686 (2017).
Cao, Y., Li, Y., Coleman, S., Belatreche, A. & McGinnity, T. M. Detecting price manipulation in the financial market. 2014 IEEE Conference on Computational Intelligence for Financial Engineering & Economics (CIFEr) (2014). https://doi.org/10.1109/cifer.2014.6924057
Nushi, B., Kamar, E. & Horvitz, E. Towards accountable AI: hybrid human-machine analyses for characterizing system failure. arXiv:1809.07424 (2018).
Beyer, H. L., Dujardin, Y., Watts, M. E. & Possingham, H. P. Solving conservation planning problems with integer linear programming. Ecol. Model. 328 , 14–22 (2016).
Whittaker, M. et al. AI Now Report 2018 (AI Now Institute, 2018).
Petit, M. Towards a critique of algorithmic reason. A state-of-the-art review of artificial intelligence, its influence on politics and its regulation. Quad. del CAC 44 (2018).
Scholz, R. et al. Unintended side effects of the digital transition: European scientists’ messages from a proposition-based expert round table. Sustainability 10 , 2001 (2018).
Ramirez, E., Brill, J., Maureen, K., Wright, J. D. & McSweeny, T. Data Brokers: A Call for Transparency and Accountability (Federal Trade Commission, 2014).
Panch, T., Mattie, H. & Celi, L. A. The “inconvenient truth” about AI in healthcare. npj Digit. Med 2 , 77 (2019).
Solaiman, S. M. Legal personality of robots, corporations, idols and chimpanzees: a quest for legitimacy. Artif. Intell. Law 25 , 155–179 (2017).
West, J. & Bhattacharya, M. Intelligent financial fraud detection: a comprehensive review. Comput. Secur 57 , 47–66 (2016).
Hajek, P. & Henriques, R. Mining corporate annual reports for intelligent detection of financial statement fraud – A comparative study of machine learning methods. Knowl.-Based Syst. 128 , 139–152 (2017).
Perry, W. L., McInnis, B., Price, C. C., Smith, S. C. & Hollywood, J. S. Predictive Policing: The Role of Crime Forecasting in Law Enforcement Operations (RAND Corporation, 2013).
Gorr, W. & Neill, D. B. Detecting and preventing emerging epidemics of crime. Adv. Dis. Surveillance 4 , 13 (2007).
IEEE. Ethically Aligned Design - Version II overview (2018). https://doi.org/10.1109/MCS.2018.2810458
European Commission. Draft Ethics Guidelines for Trustworthy AI (Digital Single Market, 2018).
Lipton, Z. C. The mythos of model interpretability. Commun. ACM 61 , 36–43 (2018).
Dignum, V. Responsible Artificial Intelligence (Springer International Publishing, 2019).
Future of Life Institute. Open Letter on Autonomous Weapons (Future of Life Institute, 2015).
Future of Life Institute. Annual Report 2018. https://futureoflife.org/wp-content/uploads/2019/02/2018-Annual-Report.pdf?x51579
Montes, G. A. & Goertzel, B. Distributed, decentralized, and democratized artificial intelligence. Technol. Forecast. Soc. Change 141 , 354–358 (2019).
Butler, A. J., Thomas, M. K. & Pintar, K. D. M. Systematic review of expert elicitation methods as a tool for source attribution of enteric illness. Foodborne Pathog. Dis. 12 , 367–382 (2015).
Morgan, M. G. Use (and abuse) of expert elicitation in support of decision making for public policy. Proc. Natl Acad. Sci. USA 111 , 7176–7184 (2014).
United Nations Human Rights. Sustainable Development Goals Related Human Rights (United Nations Human Rights, 2016).
Draft Committee. Universal Declaration of Human Rights (United Nations, 1948).
Acknowledgements
R.V. acknowledges funding provided by KTH Sustainability Office. I.L. acknowledges the Swedish Research Council (registration number 2017-05189) and funding through an Early Career Research Fellowship granted by the Jacobs Foundation. M.B. acknowledges Implicit SSF: Swedish Foundation for Strategic Research project RIT15-0046. V.D. acknowledges the support of the Wallenberg AI, Autonomous Systems, and Software Program (WASP) program funded by the Knut and Alice Wallenberg Foundation. S.D. acknowledges funding from the Leibniz Competition (J45/2018). S.L. acknowledges funding from the European Union’s Horizon 2020 Research and Innovation Programme under the Marie Skłodowska–Curie grant agreement number 748625. M.T. was supported by the Ethics and Governance of AI Fund. F.F.N. acknowledges funding from the Formas grant number 2018-01253.
Author information
Authors and affiliations.
Linné FLOW Centre, KTH Mechanics, SE-100 44, Stockholm, Sweden
Ricardo Vinuesa
Division of Robotics, Perception, and Learning, School of EECS, KTH Royal Institute Of Technology, Stockholm, Sweden
Hossein Azizpour & Iolanda Leite
Division of Media Technology and Interaction Design, KTH Royal Institute of Technology, Lindstedtsvägen 3, Stockholm, Sweden
Madeline Balaam
Responsible AI Group, Department of Computing Sciences, Umeå University, SE-90358, Umeå, Sweden
Virginia Dignum
Leibniz-Institute of Freshwater Ecology and Inland Fisheries, Müggelseedamm 310, 12587, Berlin, Germany
Sami Domisch
AI Sustainability Center, SE-114 34, Stockholm, Sweden
Anna Felländer
Basque Centre for Climate Change (BC3), 48940, Leioa, Spain
Simone Daniela Langhans
Department of Zoology, University of Otago, 340 Great King Street, 9016, Dunedin, New Zealand
Center for Brains, Minds and Machines, Massachusetts Institute of Technology, Cambridge, Massachusetts, 02139, USA
Max Tegmark
Unit of Energy Systems Analysis (dESA), KTH Royal Institute of Technology, Brinellvagen, 68SE-1004, Stockholm, Sweden
Francesco Fuso Nerini
Contributions
R.V. and F.F.N. ideated, designed, and wrote the paper; they also coordinated inputs from the other authors, and assessed and reviewed SDG evaluations as for the Supplementary Data 1 . H.A. and I.L. supported the design, wrote, and reviewed sections of the paper; they also assessed and reviewed SDG evaluations as for the Supplementary Data 1 . M.B., V.D., S.D., A.F. and S.L. wrote and reviewed sections of the paper; they also assessed and reviewed SDG evaluations as for the Supplementary Data 1 . M.T. reviewed the paper and acted as final editor.
Corresponding authors
Correspondence to Ricardo Vinuesa or Francesco Fuso Nerini .
Ethics declarations
Competing interests.
The authors declare no competing interests.
Additional information
Peer review information Nature Communications thanks Dirk Helbing and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary information
Description of Additional Supplementary Files

Supplementary Data 1

Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ .
About this article
Cite this article.
Vinuesa, R., Azizpour, H., Leite, I. et al. The role of artificial intelligence in achieving the Sustainable Development Goals. Nat Commun 11 , 233 (2020). https://doi.org/10.1038/s41467-019-14108-y
Received : 03 May 2019
Accepted : 16 December 2019
Published : 13 January 2020
DOI : https://doi.org/10.1038/s41467-019-14108-y
Artificial Intelligence in Higher Education: Challenges and Opportunities
https://repository.uel.ac.uk/item/8x0w8
Artificial intelligence and education and skills
As AI rapidly advances, it is crucial to understand how education will be affected. Important questions to consider include: How can AI be compared to humans? How do AI systems perform tasks from various capability domains such as language, reasoning, sensorimotor or social interaction domains? What are the implications for education and training?
- Putting AI to the test: How does the performance of GPT and 15-year-old students in PISA compare?
- Is Education Losing the Race with Technology?: AI's Progress in Maths and Reading
Key messages
We need robust measures of AI capabilities
Understanding how AI can affect the economy and society – and the education system that prepares students for both – requires an understanding of the capabilities of this technology and their development trajectory. Moreover, AI capabilities need to be compared to human skills to understand where AI can replace humans and where it can complement them. This knowledge base will help predict which tasks AI may automate and, consequently, how AI may shift the demand for skills. Policy makers can use this information to reshape education systems in accordance with future skills needs and to develop tailored labour-market policies.
We need to rethink education in light of developing AI capabilities
As AI rapidly advances, it is becoming evident that it is starting to outpace humans in critical areas such as reading and scientific reasoning. This prompts us to reconsider our educational approach. We must determine which skills to prioritise, which to phase out, and where to place greater emphasis in an AI-influenced world. We need to anticipate how learning methods and teaching practices will evolve. At the same time, more profound questions about the overall goals of education are emerging as the cognitive, physical and social capabilities of AI continue to rise.
We need to encourage research on using generative AI and promote forward-looking guidance
Countries should encourage collaborative research on effective and equitable use of generative AI in the teaching and learning process to inform forward-looking guidance and dedicated training programmes. Monitoring impact and sharing (international) best practices across researchers, developers, and education stakeholders will help cast light on the multiple benefits of generative AI, as well as on its limitations, allowing innovation and improvements while mitigating pitfalls.
More facts, key findings and policy recommendations
How do GPT and student performance in PISA compare?
The performance of GPT – the AI system behind OpenAI's chatbot ChatGPT – on reading and science is at a higher level than that of students. GPT-3.5 (released in November 2022) can solve 73% of the reading test questions and 66% of the science questions, while GPT-4 (a more powerful version released in March 2023) scores 85% and 84%, respectively. In contrast, the mathematical capability of GPT-3.5 and GPT-4 proved to be still below that of students. Moreover, GPT-4, released only a couple of months after its predecessor, performs at a substantially higher level in each capability domain.
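The gap between the two model versions can be tallied directly as percentage-point gains; a minimal sketch (the figures are the ones quoted above, not new data):

```python
# Reported share of PISA test questions solved (percent), as quoted above.
scores = {
    "reading": {"GPT-3.5": 73, "GPT-4": 85},
    "science": {"GPT-3.5": 66, "GPT-4": 84},
}

# Percentage-point gain from GPT-3.5 (Nov 2022) to GPT-4 (Mar 2023).
gains = {domain: s["GPT-4"] - s["GPT-3.5"] for domain, s in scores.items()}
print(gains)  # {'reading': 12, 'science': 18}
```

In a few months, the newer model gained 12 points on reading and 18 on science, which is what motivates the claim that capabilities are rising quickly.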
GPT and student performance on PISA core domain
Evolution of a written task: exploring human-AI synergies
With AI’s rapid growth, the tasks humans perform today are likely to change in the future. Consequently, exploring how humans use, rely on and collaborate with AI is necessary for rethinking education systems in light of AI capabilities. The recent advent of ChatGPT gives us a concrete example of how written tasks can evolve with AI and its increasing capabilities.
Related publications
Programmes and projects
- Artificial Intelligence and the Future of Skills Artificial Intelligence (AI) and robotics are becoming increasingly sophisticated at replicating human skills. The evolution of these technologies could fundamentally transform work over coming decades and deeply affect education’s current role in providing skills and preparing learners for future work. Learn more
- Smart Data and Digital Technology in Education: Artificial Intelligence, Learning Analytics and Beyond Data and digital technologies are among the most powerful drivers of innovation in education, offering a broad range of opportunities for system and school management, as well as for teaching and learning. But they also create new policy issues as countries face challenges to reap the benefits of digitalisation in education while minimising its risks. Learn more
- Career Readiness The OECD Career Readiness project is designed to provide new advice to governments, schools, employers and other stakeholders on how to best prepare young people to compete in an ever-changing labour market. Learn more
- CERI The Centre for Educational Research and Innovation (CERI) provides and promotes international comparative research, innovation and key indicators, explores forward-looking and innovative approaches to education and learning, and facilitates bridges between educational research, innovation and policy development. Learn more
- Education Policy Outlook The Education Policy Outlook is an analytical observatory that monitors the evolution of policy priorities and policy developments from early childhood education to adult education, mainly among OECD education systems, to provide a comparative understanding of how policies are evolving, and how they can be best implemented or improved over time. Learn more
- Education and Skills Policy Programme The OECD’s programme on education and skills policy supports policy makers in their efforts to achieve high-quality lifelong learning, which in turn contributes to personal development, sustainable economic growth, and social cohesion. Learn more
- Education for Inclusive Societies Education for Inclusive Societies Project is designed to respond to the increasing diversity that characterises education systems, and seeks to help governments and relevant stakeholders achieve more equitable and inclusive education systems as a pillar to create more inclusive societies. Learn more
- Future of Education and Skills 2030 OECD Future of Education and Skills 2030 aims to build a common understanding of the knowledge, skills, attitudes and values students need in the 21st century. Learn more
- Higher Education Policy The Higher Education Policy Programme carries out analysis on a wide range of higher education systems and policies. Learn more
- PISA Research, Development and Innovation (RDI) Programme The Research, Development and Innovation (RDI) programme established by the PISA Governing Board in 2018 explores how different areas of the assessment programme (e.g. test design, scoring methodologies) can be improved. Learn more
- Resourcing school education for the digital age Since 2013, the OECD has gathered evidence on how school resource policies work in different contexts. The focus is now on digital resources to enable countries to learn from each other in the digital transformation of their education. Learn more
- Rethinking Assessment of Social and Emotional Skills Large-scale assessments of social and emotional skills mainly use students’ self-assessments, which have some flaws in terms of comparability and, to some extent, validity and interpretability. Smaller studies are trialling more direct assessments of these skills. Work is needed to translate the innovations made in these trials and test them on larger, international scales. A better understanding of social and emotional skills will lead to better inclusion of these skills in education. Learn more
- Schools+ Network Meeting the challenges of the 21st Century means that schools must be empowered to play a more central and active role in leading improvements in education. To support this, Schools+ will bring together major education networks to put schools at the centre of education design. Learn more
- Trends Shaping Education Preparing for the future means taking a careful look at how the world is changing. Reflecting on alternative futures helps anticipate and strategically plan for potential shocks and surprises. Learn more
Related policy issues
- Future of education and skills
- Digital education
- Innovations in education and skills
- Education research
- Trends shaping education and skills
- Digital divide in education
Artificial Intelligence in Education: A Review
Artificial Intelligence in education: challenges and opportunities
Many teachers already have access to a range of AI tools to enhance teaching and learning, and to prepare students for a world shaped by AI. A huge number of tried and tested AI tools for use in the classroom can be found in this list of AI Tools and Technologies across the curriculum , crowdsourced by the participants of the EU Code Week AI Basics for Schools MOOC.
AI applications such as language learning apps, language translators, math helpers, tools for automatic transcription and subtitling, or digital assistants that offer customised learning experiences are already widely used to accelerate personalised learning. AI has also shown great potential in supporting students with special needs. AI-driven solutions might fundamentally transform assessment practices by providing students with in-depth assessment and timely, focused feedback. Effective use of learning analytics enables teachers to gain a deeper insight into how their students are learning, what problems they are facing, how motivated they are, how they are feeling and how they respond to a learning situation, so that teachers can select appropriate teaching methods and differentiate the learning process.
Nonetheless, poor design, improper use and negative consequences of AI systems can cause irreparable harm, especially to young people. Here are two examples, related to disinformation and algorithmic bias:
Rapid advances in AI have accelerated the production of synthetic media, colloquially known as deepfakes. Deepfakes refer to the algorithmic generation, manipulation and modification of audio tracks, videos, images and text for the purpose of misleading people or changing their original meaning. This technology may seem advanced and, as such, out of reach for students, but it is far from inaccessible. For example, TikTok users can use free apps that allow for fast and easy face swapping in videos and photos, thus spreading fake media and causing harm to their peers. I strongly believe that raising awareness of fabricated media and learning how to critically analyse the content students create and consume is nowadays more essential than ever. I invite you to check out the website entitled Which face is real, an interesting project developed to raise awareness of deepfakes and how to spot them at a single glance.
In my opinion, one of the most relevant ethical concerns that AI has raised is algorithmic bias. It refers to errors that create unfair outcomes, such as discrimination on the grounds of gender, race, ethnicity or socio-economic background. It is driven by the quality and representativeness of data, intentional or unintentional biases of humans who design AI systems and the way these AI systems are developed and deployed. An example of gender bias is a language translator making assumptions that doctors and pilots are male, while nurses and flight attendants are female. Another example is deliberately adding racist or sexist language to a chatbot so that it communicates in a disrespectful, rude and offensive way.
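The mechanism behind such bias is easy to demonstrate with a toy model (the training data and occupations below are invented for illustration): a system that simply predicts the most frequent gender observed for each occupation in its training data will faithfully reproduce whatever skew that data contains.

```python
from collections import Counter

# Invented, deliberately skewed "training data": (occupation, observed gender).
training_data = [
    ("doctor", "male"), ("doctor", "male"), ("doctor", "female"),
    ("nurse", "female"), ("nurse", "female"), ("nurse", "male"),
]

def majority_gender(occupation):
    """Predict the most frequent gender seen for this occupation in training."""
    counts = Counter(g for occ, g in training_data if occ == occupation)
    return counts.most_common(1)[0][0]

# The model reflects the skew in its data, not reality.
print(majority_gender("doctor"))  # male
print(majority_gender("nurse"))   # female
```

Real systems are far more complex, but the principle is the same: unrepresentative data produces systematically unfair predictions, which is why data quality and representativeness matter so much.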
It is still not clear what happens in the AI ‘Black Box’ and why ‘invisible’ algorithms make certain decisions that can have a tremendously negative impact on young people, their education and consequently on their future life opportunities. The AI decision-making process needs to be transparent and explainable. Unbiased and fair decisions need to be guaranteed for all students equally. A critical approach to understanding how AI works plays a significant role in raising awareness of algorithmic bias and increasing AI’s accountability, transparency and fairness.
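One common way to make an opaque decision more inspectable is feature attribution: remove one input at a time and measure how the output changes. A minimal sketch on a hypothetical linear admissions scorer (the features and weights are invented, not taken from any real system):

```python
# Hypothetical linear scorer; the features and weights are invented.
WEIGHTS = {"grades": 0.6, "test_score": 0.3, "essay": 0.1}

def score(features):
    """Weighted sum of the provided feature values."""
    return sum(WEIGHTS[name] * value for name, value in features.items())

def attributions(features):
    """Leave-one-feature-out: how much does the score drop without each input?"""
    base = score(features)
    return {name: base - score({k: v for k, v in features.items() if k != name})
            for name in features}

student = {"grades": 90, "test_score": 80, "essay": 70}
print(attributions(student))  # grades contributes most to this decision
```

An explanation of this kind tells a student which inputs drove a decision, which is a first step towards the transparency and accountability the paragraph above calls for.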
Arjana Blazic is a teacher trainer and instructional designer. She is a co-author of the Croatian National Curricula for English Language Teaching and the Use of ICT as a Cross-Curricular Topic. She works as an external expert for EU Code Week where she develops educational resources and teacher training opportunities.