Abstract
Debates on superintelligence preparedness have long been dominated by precautionary logics that emphasize speculative catastrophic risks. While these frameworks often present themselves as universal, they obscure justice-based perspectives and reinforce epistemic asymmetries between the Global North and South. This article interrogates the global discourse of “AI precaution” through a justice lens, drawing on qualitative analysis of policy documents and expert interviews conducted across diverse contexts, including Kenya, Brazil, India, South Africa, and the Philippines. The findings reveal three key dynamics. First, epistemic exclusion, where expertise and lived experience from the Global South are marginalized within precautionary imaginaries. Second, temporal asymmetry, as precautionary frameworks impose linear, universal timelines of risk that neglect locally situated priorities and temporalities. Third, distributive imbalance, whereby the burdens of precaution disproportionately fall on communities least responsible for the development of AI systems. To address these challenges, the paper introduces the concept of situated precaution as a pluralist alternative to dominant precautionary logics. Situated precaution foregrounds epistemic justice and temporal sovereignty, highlighting the importance of recognizing diverse knowledge systems, political contexts, and social vulnerabilities in shaping AI governance. By advancing this reframing, the article bridges critical theory, science and technology studies, and global policy debates, offering a framework for more equitable approaches to superintelligence preparedness. Ultimately, this reframing challenges dominant narratives that universalize risk while erasing alternative perspectives, and it contributes to a broader rethinking of what it means to govern AI responsibly in a deeply unequal world, emphasizing the need for governance models that anticipate speculative futures while also addressing present injustices.
Keywords
Artificial Intelligence Governance, Superintelligence Preparedness, Epistemic Justice, Global South Perspectives, Situated Precaution, Science and Technology Studies
1. Introduction
Artificial intelligence (AI) has moved rapidly from speculative possibility to pressing global concern. In the past decade, discussions of “AI safety” and “superintelligence preparedness” have entered the mainstream of policy, philanthropy, and research, with organizations such as OpenAI, DeepMind, and the Future of Humanity Institute framing existential risk from advanced AI as an urgent global priority. Governments from the United States to the European Union, and more recently China, have begun to embed precautionary language into their policy roadmaps, sometimes linking AI governance directly to questions of global stability, competitiveness, and security. Within this landscape, preparedness for superintelligence has been framed as a matter not only of technological foresight but also of planetary survival.
Yet, this apparent consensus obscures deep fractures in how risks are understood, whose voices are amplified, and which futures are privileged. Existing literature and policy discourse are dominated by Global North institutions and epistemic communities, often rooted in elite universities, well-funded think tanks, and corporate labs. These actors articulate AI’s potential dangers in abstract, universalizing terms, such as “human extinction,” “misaligned optimization,” or “loss of control.” While these scenarios capture media attention and mobilize substantial resources, they risk sidelining the uneven, material realities of AI’s actual deployment—realities shaped by labour exploitation, environmental extraction, social inequality, and fragile infrastructures, particularly in the Global South.
The central provocation of this paper is that preparedness for AI cannot be treated as a universal, context-free exercise. Instead, it must be reframed through a justice lens that foregrounds epistemic asymmetries, sociotechnical imaginaries, and contested forms of agency. By interrogating how preparedness discourses are constructed, circulated, and localized, this paper reveals how global debates risk reproducing patterns of digital coloniality, while also pointing toward more pluralistic and accountable approaches to governance.
1.1. The Limits of Universalized Precaution
Mainstream AI safety narratives rest heavily on long-termist assumptions: the belief that the primary ethical responsibility of the present is to safeguard the distant future of humanity, potentially spanning billions of lives. This orientation, while philosophically provocative, imposes a singular temporal horizon—one where speculative futures take precedence over immediate harms. In practice, this framing tends to elevate technical and philosophical elites who claim the expertise to forecast humanity’s trajectory, while marginalizing communities grappling with the present-day consequences of AI: precarious platform labour, algorithmic surveillance, discriminatory credit scoring, or environmental degradation from resource-intensive computation.
As scholars of science and technology studies (STS) have long argued, technologies are never merely technical. They embody cultural assumptions, reproduce power hierarchies, and materialize in specific sites and practices. Treating superintelligence preparedness as a universal imperative risks erasing these dimensions, casting “humanity” as a monolithic entity while ignoring the vastly different stakes for communities across the world. A justice-oriented reframing demands that we attend not only to hypothetical future risks but also to the epistemic, social, and political struggles shaping AI governance today.
1.2. Situating Preparedness in STS and Global Governance
This study builds on several currents of STS scholarship. First, it draws from work on epistemic injustice [17, 25, 13], which examines how some voices are systematically excluded from processes of knowledge production and governance. Second, it engages with scholarship on sociotechnical imaginaries [33, 34], which explores how collective visions of technological futures shape policy, investment, and social order. Finally, it contributes to critical AI ethics and decolonial STS [40, 10, 32], which interrogate how global AI infrastructures reproduce racialized and colonial hierarchies.
By weaving these perspectives together, this paper situates superintelligence preparedness not as a neutral, technical endeavour but as a contested sociopolitical practice. The “precautionary turn” in AI governance reflects broader dynamics of anticipation, control, and legitimation that have long preoccupied STS scholars. At the same time, preparedness discourses intersect with geopolitics, as states and corporations position themselves as responsible stewards of AI’s future, while sidelining communities whose labour and data underwrite the very systems at stake.
1.3. Research Design and Contribution
Empirically, the paper is based on 55 semi-structured interviews with policymakers, civil society leaders, industry practitioners, and researchers across seven countries: Kenya, Brazil, India, South Africa, the Philippines, the United States, and China. These interviews were complemented by document analysis of policy roadmaps, industry white papers, and civil society reports. The comparative design enables exploration of how precautionary discourses circulate globally, how they are contested in diverse contexts, and how they materialize in uneven governance structures.
From this analysis, the paper develops two conceptual contributions:
1) Situated Precaution: Preparedness cannot be reduced to universal technical protocols or abstract risk models. It must be grounded in sociopolitical and cultural contexts, attentive to immediate harms as well as speculative risks, and responsive to local forms of knowledge and governance.
2) Temporal Sovereignty: Communities negotiate AI futures across divergent temporal horizons, often balancing immediate survival, medium-term development, and long-term aspirations. Recognizing these multiple temporalities challenges the dominance of long-termist frameworks and foregrounds plural imaginaries of the future.
These concepts reconfigure preparedness from a narrow exercise in speculative risk management into a contested sociotechnical practice, shaped by struggles over power, legitimacy, and justice.
1.4. Structure of the Paper
The remainder of the paper proceeds in five sections. Following this introduction, Section 2 reviews relevant literature in AI safety, STS, and global governance, highlighting the research gap around justice-oriented approaches to precaution. Section 3 details the methodology, emphasizing the comparative and qualitative design that anchors the analysis. Section 4 presents the findings, structured around the themes of epistemic asymmetries, situated precaution, and temporal sovereignty. Section 5 offers a discussion that situates these findings within broader debates on AI governance, and Section 6 concludes with implications for policy and future research.
1.5. Contribution to IJSTS
This paper contributes to the International Journal of Science, Technology and Society by advancing a justice-oriented reframing of AI safety, grounded in comparative empirical research and theoretical synthesis. It speaks directly to IJSTS’s mission to interrogate the entanglement of science, technology, and society, and to foreground questions of equity, power, and global diversity. By shifting the conversation on superintelligence preparedness away from universalized existential risks and toward situated, plural, and accountable forms of precaution, the paper provides both theoretical innovation and policy relevance.
In doing so, it not only expands scholarly debates on AI governance but also intervenes in urgent policy conversations about how global futures are imagined and governed.
2. Literature Review: AI Precaution, Epistemic Justice, and the Contestation of Global Risk Imaginaries
AI governance is increasingly shaped by anticipatory logics, particularly those centred on mitigating hypothetical risks posed by artificial general intelligence (AGI). The rise of precautionary discourse—often framed in terms of “alignment,” “safe AI,” or “global governance”—has catalysed a transnational apparatus of risk management [8, 46, 57, 21]. Yet, beneath the surface of universalist language lies a deeper question: whose futures are imagined as at risk, and whose epistemologies count in defining and governing those risks?
This paper introduces the concept of Situated Precaution to describe a justice-oriented, context-aware approach to AI governance that resists epistemic universalism and embraces plural futures [33]. This section synthesizes four strands of scholarship—AI safety and longtermism, postcolonial science and technology studies (STS), global risk governance, and epistemic justice—to conceptualize precaution as a contested global imaginary. Rather than a neutral response to technical risk, precautionary discourse emerges as a sociotechnical and geopolitical project that privileges certain futures while sidelining others [6].
2.1. The Emergence and Critique of AI Safety as Global Risk Imaginary
AI safety has evolved from a niche philosophical concern into a dominant discourse of global governance. Early work framed AGI as an existential threat to humanity, emphasizing catastrophic scenarios such as extinction or loss of control over autonomous systems [60, 55, 42]. More recent interventions extend this framing into institutional infrastructures: the OECD’s Framework for Classifying AI Systems (2021), UNESCO’s Recommendation on the Ethics of AI (2021), and the Global Partnership on AI (GPAI). These initiatives embed precaution within a narrative of civilizational stewardship, driven largely by US, UK, and EU actors.
The proliferation of these initiatives demonstrates how speculative imaginaries become institutionalized. As Mulgan notes, collective intelligence exercises in AI governance frequently elevate voices from policy think tanks and elite technical institutions but seldom draw from grassroots or non-Western epistemic communities [19]. The risk is a narrowing of global debate around a small set of privileged actors and institutions.
Critics highlight that this universalist framing marginalizes other perspectives. Scholars such as Birhane [10], Dafoe, and Mohamed, Isaac, and Png [45] argue that AI safety debates reflect a technocratic epistemic community focused on speculative extinction rather than ongoing structural harm. This emphasis deflects attention from algorithmic injustice (Eubanks, 2018), data colonialism [38, 29, 4], racialized surveillance (Benjamin, 2019), and digital labour exploitation [39, 7].
Concepts like “AI alignment” presume a universal set of human values yet rarely interrogate whose values count. Green and Viljoen [32] warn that “algorithmic realism” requires acknowledging political and cultural diversity in defining alignment goals. Without this recognition, alignment becomes an epistemic project of homogenization. Arora’s [7] critique of the “data pyramid” is instructive: while data is extracted globally, control over classification, interpretation, and governance resides in the Global North. Thus, AI safety functions as a global risk imaginary [37], universalizing particular fears while marginalizing others.
2.2. Postcolonial STS and Epistemic Injustice in Global AI Governance
Postcolonial and decolonial STS reveal how science and technology often reproduce hierarchies of knowledge, authority, and value [58, 27, 20]. Concepts such as epistemic injustice [15, 1], epistemicide (De Sousa Santos, 2014), and the coloniality of power (Quijano, 2000) illuminate how non-Western epistemologies are marginalized in the governance of technological futures.
In AI, these injustices manifest as:
1) Testimonial injustice, where Global South actors are perceived as less authoritative in global forums.
2) Hermeneutical injustice, where the experiences of marginalized communities cannot be adequately captured in dominant risk vocabularies [25, 43].
This has profound implications for global governance. Indigenous epistemologies emphasizing collective rights and stewardship (Chennells & Steenkamp, 2018), African traditions of ubuntu and relationality (Mhlambi, 2020), and Latin American critiques of cognitive imperialism (Milan & Treré, 2019) remain peripheral to debates dominated by rationalist, probabilistic modelling. Mohamed et al. [45] argue that this exclusion reproduces the coloniality of AI governance, embedding logics of control, extraction, and optimization.
These perspectives also show how epistemic injustice is not merely discursive but institutional. Decisions about what counts as AI “risk” are entangled with funding flows, research priorities, and governance architectures. As Sambuli observes, inclusion is often tokenistic: voices from the Global South are invited into governance dialogues but rarely shape agendas. This underscores why a concept such as Situated Precaution is necessary—to challenge the epistemic privileging embedded in global risk imaginaries.
2.3. Global Risk Governance and the Politics of Precaution
Theories of risk society [23, 40] demonstrate that global risks operate not through measurable probability but through imaginative plausibility. From this perspective, AI safety is less a response to quantifiable danger and more a future-making practice: a speculative narrative that gains legitimacy through repetition in institutional contexts.
These imaginaries are not universal. They are shaped by the geopolitical positionalities of powerful states and corporations. Through initiatives like the OECD AI Principles, UNESCO frameworks, and the Bletchley Dialogue, precautionary discourse has been internationalized but not democratized. The Global South often appears as a testing ground or site of implementation, rather than as a co-author of global norms.
As Amoore and Ananny [6] note, global governance imaginaries often operate through strategic ambiguity: cloaking asymmetrical power relations in universalist rhetoric. For instance, while the US and UK prioritize existential risks, policymakers in India or Kenya are preoccupied with digital sovereignty, infrastructural dependency, and socio-economic displacement [53]. These asymmetries highlight how “precaution” becomes a terrain of contestation, reflecting not only divergent risk perceptions but also competing claims to epistemic authority.
This has significant implications for legitimacy. When risk frameworks are universalized without meaningful participation, they risk reproducing epistemic injustice at scale, silencing situated concerns and privileging elite imaginaries of catastrophe.
2.4. Reclaiming Epistemic Plurality in AI Precaution
A growing body of scholarship proposes alternatives to universalist precaution by advancing epistemic disobedience, decolonial AI [45], and feminist situated ethics [34, 10]. These frameworks emphasize plurality, context, and justice in shaping global AI governance.
Indigenous digital sovereignty emphasizes collective rights and cultural continuity [18]. African feminist perspectives highlight care, reciprocity, and resistance to extractive infrastructures. Latin American scholarship on algorithmic colonialism critiques the dependency created by imported AI infrastructures. Each of these traditions challenges dominant imaginaries by foregrounding different temporalities, ethics, and modes of responsibility.
Importantly, these perspectives do not simply call for inclusion within existing governance models but for the transformation of the models themselves. They argue that precaution should not be a one-size-fits-all logic but a negotiated process responsive to historical memory and situated vulnerabilities. This perspective underpins the paper’s contribution: that AI precaution must be reimagined as a justice-centred and plural framework aligned with lived realities rather than abstract global projections [52, 2, 30].
2.5. Conclusion
This literature review reframes AI precaution as a contested global imaginary shaped by epistemic asymmetries, geopolitical hierarchies, and institutional authority. Without attention to epistemic justice and postcolonial critique, global AI safety risks reproducing the very inequalities it purports to mitigate. By integrating STS, decolonial, and justice-centred literatures, this section provides the theoretical scaffolding for the empirical analysis that follows.
The paper’s key contribution lies in demonstrating how concepts such as Situated Precaution, Temporal Sovereignty, and Plural Futures extend this scholarship. These frameworks not only critique existing imaginaries but also offer tools for rethinking global AI governance as a process grounded in justice, plurality, and epistemic diversity. The subsequent empirical sections explore how stakeholders in Kenya, Brazil, India, South Africa, and the Philippines navigate and reinterpret global precautionary discourse, anchoring these theoretical interventions in comparative practice.
3. Methodology: Comparative, Mixed-Methods Inquiry into AI Precaution Discourses
This study employs a comparative, mixed-methods research design to investigate how AI precautionary discourses—especially those centred around artificial general intelligence (AGI)—are interpreted, resisted, and reconfigured within diverse Global South contexts. The primary focus is on five countries: Kenya, Brazil, India, South Africa, and the Philippines. These cases were selected to illuminate how precautionary imaginaries, often exported from the Global North, encounter, clash with, or adapt to specific socio-political contexts shaped by colonial legacies, resource asymmetries, and epistemic marginalisation.
To situate these cases within broader geopolitical debates, supplementary document analysis was also conducted on the United States and China. These two states are recognized leaders in shaping global AI safety discourse, and including them provides a comparative backdrop against which Global South perspectives can be better understood. Crucially, however, the empirical interview data is drawn exclusively from the five Global South countries. This ensures that the study’s claims about epistemic justice and Situated Precaution are grounded in the lived realities of actors beyond dominant Anglo-American and Chinese contexts [62].
3.1. Justification for Mixed-Methods Design
The study adopts a qualitative-dominant mixed-methods approach, integrating critical discourse analysis of policy documents with semi-structured qualitative interviews. While quantitative elements are not present, the combination of two distinct qualitative modes—formal texts and situated testimonies—constitutes a mixed-methods design in Creswell’s typology of qualitative-qualitative research.
This design is justified by three considerations:
1. Layered Nature of AI Precaution: AI precaution encompasses overlapping logics—technical (e.g., alignment), ethical (e.g., rights), political (e.g., national security), and existential (e.g., AGI control). Capturing this multidimensionality requires analysing both institutional texts and lived narratives.
2. Dialogic Critique: A document-only study would risk reproducing elite framings. Interviews with policy actors, academics, and civil society groups allow Global South voices to contest or reinterpret dominant imaginaries.
3. Theory-Building Imperative: The study aims not only to describe practices but to extend theory. Moving abductively between empirical insights and conceptual categories supports the development of Situated Precaution and Temporal Sovereignty.
3.2. Country Selection and Rationale
The five focal countries—Kenya, Brazil, India, South Africa, and the Philippines—were purposively selected for four reasons:
1. Each has published AI strategies or digital transformation frameworks.
2. Each participates in regional or global AI governance initiatives (e.g., UNESCO, GPAI, OECD).
3. They represent diverse legal traditions, political economies, and colonial legacies, providing comparative variation.
4. All face structural dependencies in AI infrastructure, data governance, and computational resources.
The United States and China were not part of the interview dataset but were analysed through official strategy documents and policy reports to contextualize the Global South findings. This prevents the paper from overstating its empirical claims while still allowing comparison across geopolitical scales.
3.3. Data Sources and Collection
Two primary data streams were collected between April 2023 and February 2024:
1. National AI Policy Documents (n = 9):
National AI strategies, digital roadmaps, and regulatory white papers published between 2018 and 2024 by the five Global South governments, plus major strategy papers from the U.S. and China. Inclusion criteria were:
1) Explicit reference to “AI safety,” “ethics,” “risk,” “AGI,” or “governance.”
2) Formal government endorsement.
3) Public accessibility in English, Portuguese, or Swahili (original or translated).
2. Semi-Structured Interviews (n = 55):
Conducted virtually with stakeholders in Kenya, Brazil, India, South Africa, and the Philippines. Breakdown:
1) 19 government-affiliated experts (policy units, regulators)
2) 12 academic scholars in AI ethics, STS, or law
3) 17 members of civil society organizations
4) 3 technologists or AI developers
5) 4 digital rights lawyers or regulatory advisors
Each country contributed 3-5 participants to ensure comparative breadth.
3.4. Sampling Strategy and Recruitment
Purposive sampling was employed to capture diversity across sector, gender, and institutional role. Initial outreach used:
1) Country-level AI policy directories (OECD AI Observatory, GPAI).
2) Citation networks in AI ethics and governance scholarship.
3) Regional advocacy networks (e.g., AfriTechNet, LatAm AI).
Snowball sampling supplemented underrepresented groups, especially grassroots activists.
3.5. Interview Ethics and Consent
All participants received digital information sheets, signed e-consent forms, and had anonymity as default. Transcripts were anonymised and stored securely. A member-checking process allowed participants to review anonymised excerpts, consistent with feminist and decolonial ethics of co-ownership.
3.6. Document Analysis Framework
Policy documents were analysed using critical discourse analysis (CDA), structured around three dimensions (a schematic illustration of the textual screening step follows the list):
1) Textual: keywords, metaphors, and framings (e.g., “alignment,” “race for AI,” “ethical guardrails”).
2) Interdiscursive: cross-references to other domains (e.g., military, development, bioethics).
3) Sociopolitical: institutional context, target audiences, and normative intent.
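For readers who want a concrete sense of the textual dimension, the following Python sketch shows one way a corpus of policy documents could be screened for the precautionary keywords and metaphors listed above. It is a minimal illustration under stated assumptions: the keyword list, the placeholder documents, and the simple frequency count are invented for exposition and do not reproduce the study’s actual CDA instrument.

from collections import Counter
import re

# Illustrative keyword set drawn from the framings discussed above; the study's
# actual CDA coding frame was applied interpretively and is richer than this list.
PRECAUTION_TERMS = [
    "alignment", "race for ai", "ethical guardrails",
    "existential risk", "safety", "governance",
]

def keyword_profile(text: str) -> Counter:
    """Count occurrences of each precautionary term in one policy document."""
    lowered = text.lower()
    return Counter({
        term: len(re.findall(re.escape(term), lowered))
        for term in PRECAUTION_TERMS
    })

# Hypothetical usage: `documents` maps a country label to the raw text of its
# national AI strategy (placeholders here, not real documents).
documents = {
    "Kenya": "National AI strategy text would go here ...",
    "Brazil": "Estrategia Brasileira de IA text would go here ...",
}

for country, text in documents.items():
    print(country, dict(keyword_profile(text)))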
3.7. Interview Analysis and Integration
Interview transcripts were coded in NVivo 14 following Braun and Clarke’s thematic approach:
1) Open coding for emergent narratives.
2) Axial coding to cluster themes (e.g., techno-solutionism, epistemic justice, digital sovereignty).
3) Selective coding to build narrative archetypes (e.g., “responsible innovator,” “technosceptic,” “marginalised expert”).
Triangulation compared discursive framings in policy texts with interview insights, highlighting epistemic frictions where global imaginaries clashed with local concerns.
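The coding itself was carried out in NVivo 14, but a minimal sketch of the underlying logic may help clarify how themes were tallied and then triangulated against policy texts. All data structures, theme labels, and country entries below are hypothetical and are included only to illustrate the shape of the comparison, not the study’s codebook.

from collections import defaultdict

# Invented example segments: each records an interview's country and the axial
# theme assigned to a coded excerpt (labels are illustrative, not the codebook).
coded_segments = [
    {"country": "Kenya", "theme": "epistemic justice"},
    {"country": "Kenya", "theme": "digital sovereignty"},
    {"country": "Brazil", "theme": "digital sovereignty"},
    {"country": "India", "theme": "techno-solutionism"},
]

def theme_distribution(segments):
    """Tally how often each axial theme appears per country."""
    counts = defaultdict(lambda: defaultdict(int))
    for seg in segments:
        counts[seg["country"]][seg["theme"]] += 1
    return {country: dict(themes) for country, themes in counts.items()}

# Triangulation step (schematic): compare interview themes against framings
# coded from policy documents to surface potential epistemic frictions.
policy_framings = {"Kenya": {"existential risk"}, "Brazil": {"alignment"}}

for country, themes in theme_distribution(coded_segments).items():
    interview_only = set(themes) - policy_framings.get(country, set())
    print(country, "themes absent from policy texts:", sorted(interview_only))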
3.8. Reflexivity and Limitations
The researcher’s Global North academic positionality shaped access and interpretation. Reflexive journaling tracked assumptions and boundary work. Key limitations include:
1) Language: reliance on English, Portuguese, and Swahili limited some nuance.
2) Scope: five countries illustrate but cannot represent the entire Global South.
3) Virtual interviews: online format may have reduced rapport.
Despite these constraints, triangulating documents and 55 interviews across five Global South cases provides robust, context-sensitive insights into how precaution is negotiated globally.
4. Findings
Drawing on 55 semi-structured interviews conducted across Kenya, Brazil, India, South Africa, and the Philippines, supported by analysis of U.S. and Chinese policy documents, this section presents the empirical foundation for the study’s theoretical contributions. The findings highlight how discourses of AI precaution are articulated across geopolitical contexts and how justice, temporality, and epistemic power emerge in situated ways. The analysis is organized through the conceptual lenses of Situated Precaution, Temporal Sovereignty, and Plural Futures, with interview evidence directly grounding theoretical claims [50, 26].
4.1. Kenya: Grounding Justice in Immediate Risks
Kenya’s AI debates are framed not around existential risk, but immediate harms experienced in finance, agriculture, and digital infrastructure. Respondents pointed to algorithmic exclusion in mobile lending, where AI-driven scoring systems deny access to credit for rural farmers. One policymaker explained:
“The debate about AI ending humanity feels far away from us. What we see daily are biased loan algorithms, unfair labour platforms, and systems that don’t understand our languages.” (Kenyan policymaker, Interview 12)
Civil society actors emphasized linguistic exclusion, as global AI systems overwhelmingly neglect Swahili and indigenous languages. An NGO worker added:
“When our languages are excluded, it means our people are excluded from economic participation. For us, precaution must begin with language justice.” (Civil society advocate, Interview 13)
These insights illustrate Situated Precaution, where precaution is defined by tangible risks rooted in historical inequalities and infrastructural dependency. Rather than preparing for speculative AGI threats, Kenyan actors demand attention to algorithmic justice and digital sovereignty.
4.2. Brazil: Contesting Epistemic Dependency
In Brazil, precautionary discourse is closely linked to concerns about epistemic dependency and infrastructural colonialism. Respondents highlighted reliance on Northern datasets, platforms, and ethical frameworks. A university researcher reflected:
“Most of the datasets and platforms we rely on come from outside. If we adopt precaution based only on those systems, we inherit their blind spots.” (Brazilian researcher, Interview 21)
Civil society voices tied this to broader patterns of digital colonialism. One activist noted:
“Precaution cannot mean waiting for the North to tell us what is safe. It must mean developing our own standards, or we reproduce epistemic colonialism.” (Civil society advocate, Interview 24)
Brazilian stakeholders also emphasized indigenous data sovereignty, linking AI precaution to the protection of cultural and territorial rights. The insistence that AI governance must integrate indigenous epistemologies reflects Temporal Sovereignty: the right to determine local priorities, pace, and frameworks for AI adoption. This redefinition of precaution challenges narratives of “catching up” to Northern benchmarks and instead asserts Brazil’s role as a knowledge producer, not just a policy taker.
4.3. India: Development, Inclusion, and Temporal Sovereignty
Indian participants described a fundamental tension between precautionary governance and developmental imperatives. For many, AI is a potential enabler of socio-economic mobility but also a risk factor for deepening entrenched inequalities. A government advisor stressed:
“When people speak of pausing AI or slowing innovation, it feels like they want to freeze us at a disadvantage. For India, precaution must mean guiding AI toward equitable development.” (Indian policymaker, Interview 30)
Civil society activists, however, warned that AI adoption without safeguards risks amplifying caste and gender discrimination. One respondent explained:
“If AI tools are built without us, they will reproduce caste and gender bias at scale. Precaution must mean using AI to undo, not deepen, inequalities.” (Civil society activist, Interview 33)
India illustrates Temporal Sovereignty in practice: precaution is defined not as halting progress but as steering innovation in line with developmental and justice-oriented priorities. The emphasis on plural futures for AI in agriculture, health, and education contrasts with Silicon Valley imaginaries of efficiency and control. India thus demonstrates how precaution is inseparable from development politics.
4.4. South Africa: Structural Inequality and Situated Precaution
South African respondents consistently tied AI precaution to legacies of apartheid and persistent structural inequality. A labour rights leader emphasized automation’s risks for marginalized communities:
“Our fear is not robots taking over, but the reinforcement of racialized unemployment through automated decision-making.” (Union leader, Interview 38)
Government officials echoed this, warning that Northern ethical frameworks fail to capture South Africa’s unique historical context:
“Frameworks written in Brussels or Washington do not fit here. If precaution ignores our history, it becomes meaningless.” (Government advisor, Interview 40)
These perspectives embody Situated Precaution. Precaution is not an abstract principle but a demand to account for systemic inequalities rooted in racial capitalism. The South African case highlights the need to provincialize global governance frameworks [33, 3] and adapt them to deeply stratified social realities.
4.5. The Philippines: Invisible Labor and Plural Futures
Philippine respondents foregrounded the hidden labour sustaining global AI infrastructures. Many described precarious work in content moderation, data annotation, and platform outsourcing. A union organizer explained:
“We are the invisible workforce—labelling data, moderating content, training models. Yet when people talk about AI risks, they ignore the human cost already borne here.” (Union organizer, Interview 45)
A digital rights advocate added:
“Precautionary debates about superintelligence erase our reality. The risk here is precarity, low wages, and trauma from content moderation.” (Civil society actor, Interview 47)
These accounts exemplify Plural Futures, as Filipino stakeholders locate precaution in protecting workers’ rights and addressing existing harms rather than hypothetical risks. Their insights resonate with STS critiques of invisible labour [31, 48]. The Philippine case highlights how AI governance debates must centre the ethical visibility of labour as a core dimension of precaution [1, 54, 16].
4.6. United States: Catastrophic Risk Dominance
In the United States, interviews and policy documents revealed the dominance of catastrophic and long-termist imaginaries. An AI safety researcher stated:
“The real danger is not bias in today’s systems, but the possibility of uncontrollable superintelligence that could end human civilization.” (US researcher, Interview 50)
A think tank fellow reinforced this framing:
“Bias matters, but these are short-term problems. The priority must be preventing extinction.” (Policy expert, Interview 52)
This discourse exemplifies catastrophism, where speculative existential scenarios overshadow present harms. The U.S. case illustrates why Situated Precaution is needed to rebalance governance priorities toward immediate and justice-centred concerns [13, 61].
4.7. China: Strategic Precaution and Sovereignty
Chinese perspectives linked precaution to national sovereignty and global competition. A researcher argued:
“Precaution cannot be about slowing down. It must ensure that China is not left vulnerable in global competition.” (Chinese researcher, Interview 54)
A policymaker emphasized technological self-reliance:
“Our priority is self-reliance in AI. Precaution means securing sovereignty, not depending on external systems.” (Government advisor, Interview 55)
This represents a distinctive form of Temporal Sovereignty: unlike Brazil or India, where sovereignty is about resisting dependency, China frames precaution as maintaining geopolitical parity. The Chinese case highlights how precaution can be securitized, embedded in statecraft rather than justice discourses.
4.8. Comparative Insights: Plural Futures in Practice
Synthesizing across contexts, three patterns emerge:
1. Kenya, South Africa, and the Philippines emphasize immediate, justice-oriented risks—algorithmic exclusion, structural inequality, and hidden labour—demonstrating Situated Precaution.
2. Brazil, India, and China articulate Temporal Sovereignty, though in varied ways: Brazil through resisting dependency, India through linking precaution to development, and China through securing geopolitical parity.
3. The United States embodies the dominance of long-termist catastrophic risk narratives, marginalizing present injustices.
These findings ground the study’s theoretical contributions:
1) Situated Precaution is validated by Global South experiences of injustice.
2) Temporal Sovereignty emerges as a cross-contextual imperative to reclaim control over AI adoption timelines.
3) Plural Futures highlight diverse imaginaries, challenging universalist discourses of existential risk.
By anchoring theoretical claims in direct interview evidence, this section demonstrates that precaution is not a singular, universal concept but an inherently plural, situated, and contested practice [41, 3].
5. Discussion
The findings from 55 semi-structured interviews across seven countries highlight the deep asymmetries that structure global debates on artificial intelligence (AI) precaution and preparedness. This section interprets the empirical evidence in relation to ongoing theoretical and policy debates, demonstrating how Global South perspectives unsettle the dominance of catastrophic imaginaries, extend the concept of precaution, and reframe the terms of governance. The discussion proceeds in four parts: rethinking precaution (5.1), pluralizing imaginaries of the future (5.2), bridging normative and material dimensions (5.3), and sketching policy implications (5.4).
5.1. Rethinking Precaution in the Context of Superintelligence
Mainstream discourses on superintelligence preparedness have been dominated by Anglo-American voices located in elite research hubs and policy networks [14, 50, 43]. These perspectives prioritize low-probability but high-consequence risks such as human extinction or a loss of control over artificial agents. By framing precaution in existential terms, these actors equate preparedness with technocratic foresight, emphasizing the need for technical alignment and advanced governance structures to anticipate hypothetical scenarios [15, 59, 35].
The interviews conducted for this study demonstrate that precaution cannot be reduced to this singular interpretation. Rather, precaution is situated: it is historically, institutionally, and socio-politically embedded. In Kenya, Brazil, India, South Africa, and the Philippines, participants consistently described precaution not as an abstract exercise in foresight, but as a matter of addressing pressing risks that already shape daily life. These included algorithmic exclusion from credit systems, biased recruitment platforms that reinforce inequalities, and exploitative labour practices underpinning data labelling and content moderation.
A Kenyan policymaker vividly captured this distinction:
“When they say AI risk, they mean robots destroying humanity. When we say AI risk, we mean farmers unable to sell maize because of algorithmic pricing.”
This contrast underscores a profound asymmetry: in Global North debates, precaution is predominantly about managing distant futures, while in Global South contexts it is inseparable from present injustices. As Jasanoff (2004) argues, categories of risk are not natural givens but are co-produced with social orders and hierarchies. The dominance of existential framings is thus not a universal truth but an outcome of geopolitical locations and institutional investments.
We extend this insight by advancing the concept of Situated Precaution, which recognizes that precautionary strategies must be contextually grounded. Rather than universalizing Anglo-American definitions of risk, Situated Precaution calls for recognizing the epistemic legitimacy of diverse experiences of vulnerability. This aligns with postcolonial STS critiques that urge the “provincializing” of Northern epistemologies [31, 40, 35] and foreground local knowledges not as supplementary but as constitutive of global debates.
5.2. Plural Futures Versus Singular Catastrophism
A related finding is the divergence in how futures are imagined. In long-termist AI safety discourse, the future is singular, catastrophic, and universal: humanity as a homogenous collective is threatened by extinction from superintelligent systems. This view is powerful in shaping global funding and research agendas but forecloses alternative imaginaries.
The interviews reveal that communities across the Global South imagine plural futures. In Brazil, civil society actors invoked the language of “algorithmic colonialism” [3, 19, 49], highlighting how AI infrastructures built elsewhere structure local economies in extractive ways. Indian policymakers emphasized Temporal Sovereignty: the right to determine their own pace of AI development, rather than being compelled to adopt externally imposed accelerationist timelines [41, 51].
South African regulators were preoccupied with how algorithmic decision-making reinforced legacies of apartheid-era inequality. In the Philippines, labour activists foregrounded the invisibility of data workers who provide the human scaffolding for global AI systems [28, 48, 29].
Together, these perspectives illustrate that futures are not singular but differentiated. The risks that matter in Nairobi or Manila are not the same as those that preoccupy Silicon Valley or Oxford. By insisting on Plural Futures, we challenge the monopoly of catastrophic imaginaries and instead foreground heterogeneity. This move builds on feminist STS critiques of universalist science [31, 32] and postcolonial arguments for epistemic plurality [6]. It also resonates with debates on anticipatory governance [26, 33, 7], which stress the importance of diverse imaginaries in shaping responsible innovation.
Plural Futures does not deny the possibility of catastrophic outcomes, but it refuses to let them dominate the entire horizon of governance. In doing so, it provides a constructive alternative: a framework that takes seriously the multiplicity of risks and imaginaries while resisting their erasure under universalist logics [48, 53].
5.3. Empirical Insights: Bridging the Normative and the Material
Another key contribution of this study lies in its empirical grounding. By analysing interviews and policy documents, we reveal the disjuncture between global AI safety discourse and local governance practices. While think tanks in Washington or London produce white papers on superintelligence alignment, policymakers in Johannesburg, Manila, and Nairobi are grappling with very different urgencies.
1) In South Africa, regulators described how automated decision-making systems intersect with historical racial inequalities in housing and employment.
2) In the Philippines, civil society groups warned that AI is being deployed in surveillance and counterinsurgency, amplifying concerns about authoritarian resilience.
3) In Kenya, government actors emphasized digital sovereignty, pointing to the dangers of infrastructural dependency on Chinese and American technology firms.
These insights underscore that precaution has already materialized in everyday governance struggles. AI preparedness is not an abstract intellectual exercise; it is entangled with political economies, infrastructures, and labour practices. As Gray and Suri [31] remind us, AI systems depend on “ghost work”: the invisible labour of data annotators and content moderators, often precariously located in the very regions marginalized in precautionary debates. Similarly, Benjamin [8] and Eubanks [26] demonstrate how algorithmic systems reproduce existing inequalities rather than generating entirely new ones.
From a normative standpoint, this means that justice and precaution must be co-constituted. A precautionary framework that protects against hypothetical collapse while ignoring lived vulnerabilities is ethically hollow. As Mhlambi [43] argues, epistemic injustice in AI governance arises when marginalized voices are excluded from defining risks [56]. Our findings show that integrating empirical testimonies from the Global South strengthens, rather than weakens, the theoretical foundations of precautionary governance [1, 55].
5.4. Policy Implications: Toward Inclusive Precautionary Governance
The integration of empirical and theoretical insights carries important policy implications. If precaution is situated and futures are plural, then governance mechanisms must shift accordingly. We propose three principles:
1. Epistemic Pluralism: Global debates should actively incorporate Global South perspectives as agenda-setters. This requires rethinking whose expertise counts and ensuring that diverse epistemologies shape international standards.
2. Temporal Sovereignty: Governance must respect diverse temporalities of AI adoption. Not every society needs to “race” toward superintelligence; some may choose slower, deliberative integration aligned with local needs [41].
3. Justice-First Precaution: Preparedness should be evaluated by distributive outcomes as well as technical robustness. A precautionary regime that safeguards hypothetical futures but perpetuates inequality fails the justice test [12, 62].
These principles reposition the Global South as a source of theoretical innovation, not merely empirical illustration. They sketch pathways toward inclusive precautionary governance that acknowledges heterogeneity, resists accelerationist logics, and places justice at the centre.
6. Theoretical Contributions and Future Research
This study set out to interrogate the dominant imaginaries of AI precaution and preparedness, particularly those centred on speculative risks of superintelligence, by foregrounding perspectives from the Global South. Drawing on 55 semi-structured interviews across Kenya, Brazil, India, South Africa, and the Philippines, as well as the United States and China, the analysis demonstrates that precaution is not a universal or purely technical concept. Instead, it is deeply situated, contested, and plural, shaped by histories of inequality, material infrastructures, and differentiated socio-political trajectories.
6.1. Contributions
The paper makes three interrelated conceptual contributions to Science and Technology Studies (STS) and AI governance.
6.1.1. Situated Precaution
This concept reframes precaution as an embedded practice rather than an abstract principle. In South Africa, precaution was articulated through concerns about algorithmic decision-making entrenching apartheid-era racial inequalities. In the Philippines, it centred on the invisible labour that underpins AI development, such as data labelling and content moderation. In Kenya, it meant resisting infrastructural dependency on foreign firms that limit digital sovereignty. These insights show that precaution cannot be reduced to speculative foresight about hypothetical existential risks. Instead, it must be responsive to immediate, lived injustices, validating STS claims about the co-production of risk and social order [27, 31].
6.1.2. Plural Futures
Long-termist AI safety discourse tends to construct the future as singular, catastrophic, and universal — extinction of “humanity” by superintelligence. Our findings show instead that futures are plural and differentiated. Brazilian actors emphasized algorithmic colonialism, linking AI governance to historical dependencies. Indian policymakers highlighted the need to steer AI innovation toward inclusive development, rejecting calls to “pause” innovation that would freeze them at a disadvantage. Filipino activists insisted that any conversation about the future must begin with recognition of AI’s ongoing labour exploitation. These perspectives demand that we replace singular catastrophism with Plural Futures, a framework that acknowledges heterogeneity in imaginaries and governance needs.
6.1.3. Temporal Sovereignty
Perhaps the most novel contribution of this study is the articulation of Temporal Sovereignty: the right of societies to determine their own pace and trajectory of AI adoption. While U.S. and European discourses often assume that acceleration is both inevitable and desirable, actors in Brazil, India, and China asserted that precaution cannot mean deferring to externally imposed timelines. For Brazil, this meant resisting epistemic dependency on imported data and infrastructures. For India, it meant guiding AI for equitable development rather than halting innovation. For China, precaution was tied to national security imperatives and the need to maintain parity in global competition. Temporal Sovereignty highlights how time — not just space — is entangled with power in sociotechnical imaginaries [34, 51].
Together, these three concepts reposition the Global South not as a site of empirical illustration but as a source of theoretical innovation. Nairobi, São Paulo, Delhi, Johannesburg, and Manila emerge as laboratories of conceptual reframing, demonstrating how precautionary governance can be reimagined through justice, plurality, and sovereignty.
6.2. Limitations and Directions for Future Research
This study is not without limitations. First, the empirical material, while diverse, remains geographically bounded. Expanding beyond the five Global South countries considered here would allow for a broader comparative mapping of precautionary imaginaries. Future work could include case studies from the Middle East, smaller island nations, or Indigenous governance contexts in North America and Oceania.
Second, while the paper introduces new concepts, further work is needed to operationalize them for policy and practice. Situated Precaution could be translated into governance indicators measuring responsiveness to local risks. Plural Futures could be embedded into participatory foresight methodologies that explicitly incorporate marginalized perspectives. Temporal Sovereignty could be institutionalized in multilateral governance frameworks, ensuring that societies have the right to deliberate on their own AI adoption trajectories without coercion.
Third, as with all scholarship on superintelligence preparedness, this study necessarily engages speculative domains. Its contribution lies not in predicting AI futures but in demonstrating that speculation is itself politically charged. Future research could trace the performativity of precautionary narratives: how catastrophic framings shape global funding flows, direct policy priorities, and crowd out justice-oriented perspectives. This would extend STS debates on imaginaries by showing how speculative logics actively structure resource allocation and institutional design.
Finally, this paper opens up fertile terrain for interdisciplinary inquiry. Comparative studies that link AI precaution to global challenges such as climate change, migration, and health could enrich both theoretical and practical debates. For instance, how might Temporal Sovereignty intersect with climate justice demands for differentiated responsibilities? How might Situated Precaution inform governance of AI in global health systems, where unequal access to data and technology perpetuates inequities? These intersections highlight the urgency of embedding AI governance within wider justice frameworks.
6.3. Final Remarks
The central claim of this paper is that precaution in AI governance is neither universal nor value-neutral. It is situated in histories, institutions, and struggles; it is plural in its imaginaries of risk and future; and it is sovereign in its temporalities. By bringing empirical testimonies from Kenya, Brazil, India, South Africa, the Philippines, the United States, and China into dialogue with critical STS and decolonial theory, the study demonstrates that Global South perspectives are not peripheral but central to rethinking AI governance.
As AI technologies evolve, global debates will increasingly grapple with questions of preparedness and precaution. The danger lies not only in potential technological failure but also in the epistemic injustices that occur when diverse voices are excluded from shaping the future. By proposing Situated Precaution, Plural Futures, and Temporal Sovereignty, this paper offers conceptual tools to resist such exclusions. These tools are not merely theoretical — they carry practical implications for how policies are designed, whose risks are prioritized, and whose futures are imagined as legitimate.
Ultimately, anticipating AI is not only about avoiding extinction; it is about ensuring justice in the present and possibility in the future. The challenge for policymakers, scholars, and practitioners is to create governance architectures that are globally inclusive, context-sensitive, and ethically grounded. Only then can precautionary governance move beyond speculative catastrophism toward a future where technological transformation is shared, negotiated, and just.
Abbreviations
AI: Artificial Intelligence
AGI: Artificial General Intelligence
STS: Science and Technology Studies
UN: United Nations
UNESCO: United Nations Educational, Scientific and Cultural Organization
OECD: Organisation for Economic Co-operation and Development
GPAI: Global Partnership on Artificial Intelligence
LATAM: Latin America
UK: United Kingdom
US: United States
IJSTS: International Journal of Science, Technology and Society
Author Contributions
Achi Iseko is the sole author. The author read and approved the final manuscript.
Funding
This research received no external funding.
Human Ethics and Consent to Participate Declarations
All interview participants provided informed consent. This research was conducted in accordance with relevant ethical guidelines for research involving human participants. No formal institutional ethics board review was required, as the study involved non-interventional expert interviews and complied with standard ethical practices for qualitative research.
Conflicts of Interest
The author declares no conflicts of interest.
References
[1] Abebe, R., Barocas, S., Kleinberg, J., Levy, K., Raghavan, M., & Robinson, D. (2020). Roles for computing in social change. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 252-260. https://doi.org/10.1145/3351095.3372871
[2] Ahmed, S. (2012). On being included: Racism and diversity in institutional life. Duke University Press. https://doi.org/10.2307/j.ctv11cw25q
[3] Amodei, D., Olah, C., Steinhardt, J., Christiano, P., Schulman, J., & Mane, D. (2016). Concrete problems in AI safety. arXiv preprint arXiv:1606.06565. https://doi.org/10.48550/arXiv.1606.06565
[4] Amoore, L. (2013). The politics of possibility: Risk and security beyond probability. Duke University Press. https://doi.org/10.1215/9780822395934
[5] Amoore, L. (2020). Cloud ethics: Algorithms and the attributes of ourselves and others. Duke University Press. https://doi.org/10.1515/9781478007504
[6] Ananny, M., & Crawford, K. (2018). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society, 20(3), 973-989. https://doi.org/10.1177/1461444816676645
[7] Arora, P. (2016). Bottom of the data pyramid: Big data and the global South. International Journal of Communication, 10, 1681-1699.
[8] Benjamin, R. (2019). Race after technology: Abolitionist tools for the New Jim Code. Polity. ISBN: 9781509526406.
[9] Birhane, A. (2021). Algorithmic injustice: A relational ethics approach. Patterns, 2(2), 100205. https://doi.org/10.1016/j.patter.2021.100205
[10] Birhane, A., & Cummins, F. (2019). Algorithmic injustice: A relational ethics approach. Proceedings of the First ACM Conference on Fairness, Accountability and Transparency, 1-9. https://doi.org/10.1145/3287560.3287593
[11] Birhane, A., & van Dijk, J. (2020). Robot rights? Let’s talk about human welfare instead. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 207-213. https://doi.org/10.1145/3375627.3375855
[12] Boddington, P. (2017). Towards a code of ethics for artificial intelligence. Springer. https://doi.org/10.1007/978-3-319-60648-4
[13] Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford University Press. ISBN: 9780199678112.
[14] Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77-101. https://doi.org/10.1191/1478088706qp063oa
[15] Broussard, M. (2018). Artificial unintelligence: How computers misunderstand the world. MIT Press. ISBN: 9780262537018.
[16] Chakravartty, P., & da Silva, D. F. (2012). Accumulation, dispossession, and debt: The racial logic of global capitalism—An introduction. American Quarterly, 64(3), 361-385. https://doi.org/10.1353/aq.2012.0038
[17] Chen, Y., Ni, T., Xu, W., & Gu, T. (2022). SwipePass: Acoustic-based second-factor user authentication for smartphones. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 6(3), 1-25. https://doi.org/10.1145/3550288
[18] Chennells, R., & Steenkamp, A. (2018). International ethical guidelines for health-related research involving humans: A focus on Indigenous peoples. South African Journal of Bioethics and Law, 11(1), 23-27. https://doi.org/10.7196/SAJBL.2018.v11i1.639
[19] Couldry, N., & Mejias, U. (2019). The costs of connection: How data is colonizing human life and appropriating it for capitalism. Stanford University Press. https://doi.org/10.1515/9781503609754
[20] Crawford, K. (2021). Atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press. ISBN: 9780300209570.
[21] Creswell, J. W., & Plano Clark, V. L. (2017). Designing and conducting mixed methods research (3rd ed.). Sage. ISBN: 9781483344379.
[22] Dafoe, A. (2018). AI governance: A research agenda. Centre for the Governance of AI. https://governance.ai/ai-governance-research-agenda
[23] De Sousa Santos, B. (2014). Epistemologies of the South: Justice against epistemicide. Routledge. https://doi.org/10.4324/9781315634876
[24] Dignum, V. (2019). Responsible artificial intelligence: Developing and using AI in a responsible way. Springer. https://doi.org/10.1007/978-3-030-30371-6
[25] Elish, M. C., & boyd, d. (2018). Situating methods in the magic of Big Data and AI. Communication Monographs, 85(1), 57-80. https://doi.org/10.1080/03637751.2017.1375130
[26] Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin’s Press. ISBN: 9781250074317.
[27] Fairclough, N. (2013). Critical discourse analysis: The critical study of language (2nd ed.). Routledge. https://doi.org/10.4324/9781315834368
[28] Floridi, L., & Cowls, J. (2019). A unified framework of five principles for AI in society. Harvard Data Science Review, 1(1). https://doi.org/10.1162/99608f92.8cd550d1
[29] Garfinkel, B., & Dafoe, A. (2019). How does the offense-defense balance scale? Journal of Strategic Studies, 42(6), 736-763. https://doi.org/10.1080/01402390.2019.1631810
[30] Goggin, G., & McLelland, M. (Eds.). (2009). Internationalizing Internet studies: Beyond Anglophone paradigms. Routledge. https://doi.org/10.4324/9780203872513
[31] Gray, M. L., & Suri, S. (2019). Ghost work: How to stop Silicon Valley from building a new global underclass. Houghton Mifflin Harcourt. ISBN: 9781328566249.
[32] Green, B., & Viljoen, S. (2020). Algorithmic realism: Expanding the boundaries of algorithmic thought. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 19-31. https://doi.org/10.1145/3351095.3372840
[33] Harding, S. (1991). Whose science? Whose knowledge? Thinking from women’s lives. Cornell University Press. ISBN: 9780801497469.
[34] Haraway, D. J. (1988). Situated knowledges: The science question in feminism and the privilege of partial perspective. Feminist Studies, 14(3), 575-599. https://doi.org/10.2307/3178066
[35] Hao, K. (2021). The messy, secretive reality behind OpenAI’s bid to save the world. MIT Technology Review. https://www.technologyreview.com
[36] Jasanoff, S. (2004). States of knowledge: The co-production of science and social order. Routledge. https://doi.org/10.4324/9780203413846
[37] Jasanoff, S., & Kim, S.-H. (2009). Containing the atom: Sociotechnical imaginaries and nuclear power in the United States and South Korea. Minerva, 47(2), 119-146. https://doi.org/10.1007/s11024-009-9124-4
[38] Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389-399. https://doi.org/10.1038/s42256-019-0088-2
[39] Koch, I., & Weitzberg, K. (2022). Data injustice and the governance of algorithms: A global perspective. The Lancet Digital Health, 4(4), e209-e210. https://doi.org/10.1016/S2589-7500(22)00032-4
[40] Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems, 25, 1097-1105. https://doi.org/10.1145/3065386
[41] Leach, M., & Scoones, I. (2006). The slow race: Making technology work for the poor. Demos. ISBN: 9781841801571.
[42] Liang, F., Das, V., Kostyuk, N., & Hussain, M. M. (2021). Constructing a data-driven society: China’s social credit system as a state surveillance infrastructure. Policy & Internet, 13(3), 415-438. https://doi.org/10.1002/poi3.237
[43] Mhlambi, S. (2020). From rationality to relationality: Ubuntu as an ethical and human rights framework for artificial intelligence governance. Carr Center Discussion Paper Series, Harvard Kennedy School.
[44] Milan, S., & Treré, E. (2019). Big data from the South: Epistemological reflections. Big Data & Society, 6(2). https://doi.org/10.1177/2053951718823857
[45] Mohamed, S., Isaac, W., & Png, M.-T. (2020). Decolonial AI: Decolonial theory as sociotechnical foresight in artificial intelligence. Philosophy & Technology, 33, 659-684. https://doi.org/10.1007/s13347-020-00405-8
[46] Mulgan, G. (2023). AI and collective intelligence. AI & Society, 38, 235-242. https://doi.org/10.1007/s00146-022-01507-2
[47] Musiani, F., & Pohle, J. (2021). The elusive politics of Internet governance. Internet Policy Review, 10(1), 1-11. https://doi.org/10.14763/2021.1.1541
[48] Ni, T., Sun, Z., Han, M., Xie, Y., Lan, G., Li, Z., Gu, T., & Xu, W. (2024). Rehsense: Towards battery-free wireless sensing via radio frequency energy harvesting. Proceedings of the Twenty-Fifth International Symposium on Theory, Algorithmic Foundations, and Protocol Design for Mobile Networks and Mobile Computing, 211-220. https://doi.org/10.1145/3617508.3665677
[49] OECD. (2021). OECD Framework for Classifying AI Systems. OECD Publishing. https://doi.org/10.1787/cb6d9eca-en
[50] Quijano, A. (2000). Coloniality of power, Eurocentrism, and Latin America. Nepantla: Views from South, 1(3), 533-580.
[51] Sambuli, N. (2021). Designing just AI in the Global South: Reflections from feminist and decolonial perspectives. Feminist Review, 129(1), 70-87. https://doi.org/10.1177/0141778921992295
[52] Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., Van Den Driessche, G., … & Hassabis, D. (2016). Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587), 484-489. https://doi.org/10.1038/nature16961
[53] Srinivasan, R. (2019). Beyond the Valley: How innovators around the world are overcoming inequality and creating the technologies of tomorrow. MIT Press. ISBN: 9780262037952.
[54] Stilgoe, J., Owen, R., & Macnaghten, P. (2013). Developing a framework for responsible innovation. Research Policy, 42(9), 1568-1580. https://doi.org/10.1016/j.respol.2013.05.008
[55] Sun, Z., Ni, T., Chen, Y., Duan, D., Liu, K., & Xu, W. (2024). Rf-egg: An RF solution for fine-grained multi-target and multi-task egg incubation sensing. Proceedings of the 30th Annual International Conference on Mobile Computing and Networking, 528-542. https://doi.org/10.1145/3636534.3649376
[56] Sun, Z., Ni, T., Yang, H., Liu, K., Zhang, Y., Gu, T., & Xu, W. (2023). FLoRa: Energy-efficient, reliable, and beamforming-assisted over-the-air firmware update in LoRa networks. Proceedings of the 22nd International Conference on Information Processing in Sensor Networks, 14-26. https://doi.org/10.1145/3583120.3586960
[57] Taylor, L. (2017). What is data justice? The case for connecting digital rights and freedoms globally. Big Data & Society, 4(2), 1-14. https://doi.org/10.1177/2053951717736335
[58] UNESCO. (2021). Recommendation on the Ethics of Artificial Intelligence. https://unesdoc.unesco.org
[59] van Dijk, T. A. (2008). Discourse and power. Palgrave Macmillan. https://doi.org/10.1057/9780230592581
[60] Yudkowsky, E. (2008). Artificial intelligence as a positive and negative factor in global risk. In N. Bostrom & M. Ćirković (Eds.), Global catastrophic risks (pp. 308-345). Oxford University Press. ISBN: 9780198570509.
[61] Yudkowsky, E. (2019). There’s no fire alarm for artificial general intelligence. Machine Intelligence Research Institute. https://intelligence.org
[62] Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. PublicAffairs. ISBN: 9781610395694.