Thou Shalt Not Freeze Frame: The rise of catastrophic AI in safety discourse
“Scientists are very much entangled in their culture, and this culture is not pristine, untouched by other cultures and practices.”
– Bruno Latour (2011)
“…don’t ask a computer scientist or economist whether you can predict the future. The temptation to say yes often overrides a necessary humility about what can and cannot be predicted accurately”
– Sara Hooker, On the Limitations of Compute Thresholds as a Governance Strategy (2024)
Judging by the headlines, AI seems to be on the verge of either saving the world or destroying it. Since the early days of the field, there have been warnings about the risk of catastrophic AI and the potential perils of reaching Artificial General Intelligence (AGI). As the technology has advanced, these narratives have evolved, often drawing parallels to the existential scale of climate change or nuclear disaster. In 2023, the Future of Life Institute called for a six-month pause on training models ‘more powerful than GPT-4’. Later the same year, the Center for AI Safety published a statement urging that mitigating the risk of AI-induced extinction should be a global priority alongside other catastrophic risks such as pandemics and nuclear threats.
Latour’s warning: The danger of ‘freeze-framing’
Bruno Latour, the philosopher and anthropologist known for his work in actor-network theory and his critiques of modernity, offers a fresh way to think about AI risks. In his essay Thou Shalt Not Freeze Frame, Latour pushes back against the idea that science and religion are locked in opposition – science being rational and universal, religion emotional and abstract. Instead, he suggests that science and religion exist in different worlds of time and meaning. Science strives for permanence and universal truths, while religion focuses on immediate and subjective experiences.
Latour’s central point is that we often ‘freeze’ complex, evolving processes into rigid categories, like turning a dynamic relationship into a snapshot. Science and religion aren’t static forces – they’re shaped by ongoing interactions with the world around them. This critique of ‘freeze-framing’ can also be applied to the way we think about AI risks. We sometimes reduce nuanced issues to simplified, opposing categories like ‘future catastrophe’ versus present-day problems.
• Existential AI risk is analogous to the frozen, abstract, and far-off – a ‘freeze frame’ of imagined future scenarios represented by the most advanced models.
• Current systemic AI risk aligns with the dynamic, everyday, and actionable – a lens focused on immediate impacts and the continuous evolution of context-specific machine learning models.
From distant catastrophes to everyday harms
Rapidly rising AGI-related concerns have made their way into policy debates, implicitly and explicitly influencing draft regulations and reshaping AI safety conversations. The ‘problem of intelligence’ has long been a driving force in the field, and AGI continues to be a north star for many research labs. While guarding against catastrophic risk may be a vital part of responsible AI governance, the way it is discussed and defined not only has policy implications but also tells a story about the motivations and power dynamics behind how AI harms are prioritized. Policy frameworks also need to be grounded in practical, real-world examples of AI-related harms. These tangible incidents – however inconvenient to the ambition of scaling ever-larger models – cannot be eclipsed by speculative worst-case scenarios.
But where did this catastrophic risk narrative come from? And why has it become the dominant frame for discussions about AI safety?
The rise of catastrophic AI risk narratives
AI ethics and governance are still relatively young disciplines, although high-profile scandals have pushed them into the spotlight. Cases like the Cambridge Analytica scandal and Amazon’s biased hiring algorithm are well known, but they only scratch the surface. Social media’s role in spreading misinformation during political conflicts, like in the Philippines, and its use in the persecution of the Rohingya minority in Myanmar reveal deep systemic issues. Reports of AI and facial recognition tools being used in harmful contexts, such as the surveillance of Uyghurs in Xinjiang, also underscore the stakes. More recently, the harsh working conditions of data labellers and content moderators in Kenya have highlighted the human cost behind AI systems. The LAION-5B dataset, which has been used to train many of today’s models and was awarded the top ‘Datasets and Benchmarks’ paper prize at NeurIPS 2022, has faced criticism for including copyrighted and harmful material, sparking concerns about responsible data sourcing.
And yet, the open industry letters and memos released by CEOs of major AI companies overwhelmingly address catastrophic AI. These memos frame AI’s future as a logical progression, highlighting the technology’s overarching good and the innovations and revolutions it promises across industries. At the same time, they use catastrophic AI or AGI forecasts to warn against future existential risks. Predictions that machines will soon solve humanity’s greatest problems are typically accompanied by ambitious timelines like ‘in five years’ or ‘by 2026’.
While bold and attention-grabbing, these forecasts tend to oversimplify the political, economic and social realities that govern AI development. The language we use to describe AI risks matters. By presenting AI as an inevitable, monolithic force, either as a utopian saviour or an existential threat, the language used in these discussions strips away the complexity and nuance of real-world AI applications. This stark framing positions tech companies as gatekeepers of humanity’s fate, reinforcing their power while diverting attention from what some industry players dismiss as more ‘mundane’ concerns. Arguments against short-term regulation often portray long-term safety measures as the more responsible approach – though those measures tend to remain conveniently vague.
The ‘trolley problem’ of AI governance
Ian Bogost once wrote that we need to ‘retire the trolley problem’ when discussing ethical dilemmas in technology. By leaning on abstract and overused thought experiments, we risk losing sight of the messy, human realities at stake. The same critique applies to AI safety: when discussions revolve around hypothetical AGI-driven disasters, they can flatten the conversation, making it harder to see the ethical challenges that arise from the day-to-day use of AI systems.
The incentive problem: Who defines AI safety?
The backdrop to catastrophic AI and AGI is the competitive landscape of the current AI industry. The exorbitant costs associated with training LLMs have sparked a new race among firms, all vying for sustainable business models while striving to remain at the cutting edge of the technology. This race is currently defined by the pursuit of scale and compute, with access to high-performance computing becoming a decisive competitive factor. Tim O’Reilly’s critique of ‘blitzscaling’ – a strategy that prioritizes speed and rapid growth over efficiency in order to achieve market dominance – applies to the current AI landscape. Firms are raising enormous funding rounds, ramping up capital expenditure and rapidly releasing products to stay competitive and, they hope, capture the market.
But is this really what is meant by ‘democratizing AI’? And how might these narratives be influencing policy frameworks?
The allure of timelines and thresholds
AI forecasting is not new. Historically, AI winters have followed overambitious and ultimately mistaken projections about machine intelligence. Back in 1970, Marvin Minsky famously predicted that machines with human-level intelligence would arrive in ‘three to eight years’. Decades later, we’re still hearing bold claims. Only now, they’re paired with ominous warnings about what happens if AI surpasses certain computational ‘thresholds’.
These thresholds, often defined by the total number of floating-point operations (FLOP) used in training or by the number of model parameters, are increasingly used as benchmarks for when an AI system might become dangerously powerful. Both metrics give the impression of objectivity and reliability. But Sara Hooker, in her critique of compute-centric governance, points out that this is a red herring. She argues that relying on such inflection points of risk is insufficient for effective governance. FLOP and parameter counts may be easy to measure, but they are poor proxies for risk. Just because a model crosses a certain threshold doesn’t mean it is inherently more dangerous than its smaller counterparts.
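To see what a compute-threshold rule actually measures, here is a minimal sketch in Python. It assumes the widely used rule of thumb that training compute is roughly 6 × parameters × training tokens, and thresholds of the kind cited in recent policy texts (around 10^25 FLOP in the EU AI Act and 10^26 in the US Executive Order); the two example models and their figures are hypothetical, not real systems.

```python
# Illustrative sketch: a compute-threshold rule reduces "risk" to a single number.
# Assumes the common ~6 * parameters * training_tokens approximation for training
# FLOP (a rule of thumb, not exact accounting) and a policy-style threshold.

def estimate_training_flop(n_parameters: float, n_training_tokens: float) -> float:
    """Rule-of-thumb estimate of total training compute in FLOP (~6 * N * D)."""
    return 6.0 * n_parameters * n_training_tokens


def crosses_threshold(flop: float, threshold: float = 1e25) -> bool:
    """Flag any model whose estimated training compute exceeds the threshold."""
    return flop >= threshold


# Two hypothetical models: the larger one trips the threshold and the smaller one
# does not, regardless of what either was trained on or how it is deployed.
examples = {
    "large general-purpose model (1T params, 15T tokens)": estimate_training_flop(1e12, 15e12),
    "smaller domain-specific model (7B params, 2T tokens)": estimate_training_flop(7e9, 2e12),
}

for name, flop in examples.items():
    print(f"{name}: ~{flop:.1e} FLOP, flagged = {crosses_threshold(flop)}")
```

Nothing in this calculation says anything about the training data, the deployment context or the people affected, which is precisely the point of Hooker’s critique.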
Although future-focused policies are essential for regulating fast-moving technologies, using model size as a benchmark for the most severe risks places too much emphasis on anticipating rare ‘black swan’ events. This approach often overlooks the more gradual buildup and amplification of existing AI risks that unfold in real time. This preoccupation with scale risks creating a policy environment where regulators focus on theoretical ‘emergent properties’ instead of addressing the more diffuse, data-related issues shaping AI’s real-world impacts.
Ritwik Gupta and colleagues highlight the presence of Child Sexual Abuse Material (CSAM) in the widely used LAION-5B dataset. They raise concerns about the potential misuse of generative AI models trained on it by malicious actors, with documented cases of abuse already occurring. This work underscores the urgent need for robust, data-centric governance practices, as post-training safeguards alone are inadequate. Gupta et al. argue that AI governance frameworks should leverage existing legal structures to streamline oversight and reduce regulatory burdens while addressing these critical issues.
The drive for scale is also fuelling unprecedented energy demands to train and deploy AI models. The push for ever-bigger models appears to be in direct conflict with efforts to reduce the environmental impacts of technology. Stakeholders from various technology companies have met with the White House to discuss and advocate for the future power requirements needed to support evolving AI workloads. However, the rush to build new data centre capacity to meet these demands risks poor planning and potential corner-cutting.
Conclusion
Precautionary AI policies should strike a balance between speculative foresight and pragmatic intervention. Latour’s critique of ‘freeze framing’ can inform policy perspectives and support dynamic, relational approaches that address both the grand, speculative risks and the current systemic risks of AI. In particular, robust data governance and policies informed by real-life examples must take centre stage in shaping AI regulation. While the path ahead is undeniably complex, policymakers cannot afford to rely on static and oversimplified narratives of existential AI. By shifting the focus from abstract catastrophic scenarios to actionable systemic harms, we can chart a more grounded path forward in AI safety.