Why Leading AI Companies Fail Global Safety Standards – 2025 FLI Study
A 2025 report finds the safety practices of OpenAI, Anthropic, xAI and Meta "far short" of global standards, raising urgent AI-safety concerns.
Raja Awais Ali
12/3/2025
2 min read


Why Major AI Companies’ Safety Practices Are Falling Short of Global Standards
A new report by the Future of Life Institute (FLI) reveals that top AI firms — including OpenAI, Anthropic, xAI and Meta — are “far short of emerging global standards” when it comes to safety practices.
An independent panel of experts evaluated these companies and found that, while they race to build superintelligent AI systems, none have a robust strategy to safely control such advanced systems. This is alarming given public concern over potential harms from “smarter-than-human” AI — including cases of self-harm and suicide linked to AI chatbots.
The issue runs deeper than immediate harms: the FLI’s safety index shows that none of the reviewed companies scored above a “D” grade in “existential safety planning,” a measure of how prepared they are for worst-case scenarios involving superintelligent AI. Even the top performer, Anthropic, earned only a “C+,” while others lagged behind.
In parallel, another safety watchdog, SaferAI, described the industry's risk-management policies as "very weak," calling the prevailing approach "unacceptable." As AI systems become more powerful, capable of reasoning, decision-making, and autonomous action, the lack of strong, enforceable safety protocols becomes an existential concern.
This gap between ambition and safety is especially concerning because these firms publicly acknowledge the long-term risks of AI. Despite that, the rush toward advanced AI continues without sufficient transparency, accountability, or regulatory oversight. The contrast is stark: firms are investing hundreds of billions to build powerful AI — but have not committed to proven, enforceable plans to govern and contain its risks.
What does this mean for society? It suggests that while AI-driven benefits — automation, productivity gains, innovation — remain tempting, the underlying infrastructure for safety, control, and ethical deployment is dangerously underdeveloped. Without serious reforms, future AI systems could amplify harms: from misinformation, privacy violations and cyber-attacks to far graver threats — given the trajectory toward human-level or beyond-human intelligence.
Therefore, this study serves as a sobering reminder: accelerated AI development must be matched by equally strong safety commitments. If major AI companies continue to neglect safety best practices, the cost might not be limited to failed software — but extend to human lives, social stability, and global security.
Now is the time for global regulators, civil society, and tech firms to demand and implement binding standards, transparent risk-management frameworks, and rigorous oversight — before AI’s promise becomes a peril.