2027 AGI forecast maps a 24-month sprint to human-level AI




The distant horizon is always murky, the minute details obscured by sheer distance and atmospheric haze. This is why forecasting the future is so imprecise: We cannot clearly see the outlines of the shapes and events ahead of us. Instead, we take educated guesses. 

The newly published AI 2027 scenario, developed by a team of AI researchers and forecasters with experience at institutions like OpenAI and The Center for AI Policy, offers a detailed 2- to 3-year forecast that includes specific technical milestones. Because it is near-term, it speaks with unusual clarity about our AI future.

Informed by extensive expert feedback and scenario planning exercises, AI 2027 outlines a quarter-by-quarter progression of anticipated AI capabilities, notably multimodal models achieving advanced reasoning and autonomy. What makes this forecast particularly noteworthy is both its specificity and the credibility of its contributors, who have direct insight into current research pipelines.

The most notable prediction is that artificial general intelligence (AGI) will be achieved in 2027, and artificial superintelligence (ASI) will follow months later. AGI matches or exceeds human capabilities across virtually all cognitive tasks, from scientific research to creative endeavors, while demonstrating adaptability, common sense reasoning and self-improvement. ASI goes further, representing systems that dramatically surpass human intelligence, with the ability to solve problems we cannot even comprehend.

Like many predictions, these are based on assumptions, not the least of which is that AI models and applications will continue to progress exponentially, as they have for the last several years. Continued exponential progress is plausible but not guaranteed, especially as the scaling of these models may now be hitting diminishing returns.

Not everyone agrees with these predictions. Ali Farhadi, the CEO of the Allen Institute for Artificial Intelligence, told The New York Times: “I’m all for projections and forecasts, but this [AI 2027] forecast doesn’t seem to be grounded in scientific evidence, or the reality of how things are evolving in AI.” 

However, others view this evolution as plausible. Anthropic co-founder Jack Clark wrote in his Import AI newsletter that AI 2027 is “the best treatment yet of what ‘living in an exponential’ might look like,” adding that it is “a technically astute narrative of the next few years of AI development.” This timeline also aligns with one proposed by Anthropic CEO Dario Amodei, who has said that AI that can surpass humans at almost everything will arrive in the next two to three years. And Google DeepMind wrote in a recent research paper that AGI could plausibly arrive by 2030.

The great acceleration: Disruption without precedent

This seems like an auspicious time. There have been similar moments in history, including the invention of the printing press and the spread of electricity. However, those advances took years, even decades, to have a significant impact.

The arrival of AGI feels different, and potentially frightening, especially if it is imminent. AI 2027 describes one scenario in which, due to misalignment with human values, superintelligent AI destroys humanity. If the authors are right, the most consequential risk for humanity may now sit within the same planning horizon as your next smartphone upgrade. For its part, the Google DeepMind paper notes that human extinction is a possible outcome of AGI, albeit an unlikely one in the authors’ view.

Opinions change slowly until people are presented with overwhelming evidence. This is one takeaway from Thomas Kuhn’s singular work “The Structure of Scientific Revolutions.” Kuhn reminds us that worldviews do not shift overnight, until, suddenly, they do. And with AI, that shift may already be underway.

The future draws near

Before the appearance of large language models (LLMs) and ChatGPT, the median timeline projection for AGI was much longer than it is today. The consensus among experts and prediction markets placed the median expected arrival of AGI around the year 2058. Before 2023, Geoffrey Hinton — one of the “Godfathers of AI” and a Turing Award winner — thought AGI was “30 to 50 years or even longer away.” However, the progress shown by LLMs led him to change his mind; he now says it could arrive as soon as 2028.


There are numerous implications for humanity if AGI does arrive in the next several years and is followed quickly by ASI. Writing in Fortune, Jeremy Kahn said that if AGI arrives in the next few years “it could indeed lead to large job losses, as many organizations would be tempted to automate roles.”

A two-year AGI runway offers an insufficient grace period for individuals and businesses to adapt. Industries such as customer service, content creation, programming and data analysis could face dramatic upheaval before retraining infrastructure can scale. This pressure will only intensify if a recession occurs in this timeframe, when companies are already looking to reduce payroll costs, often by replacing personnel with automation.

Cogito, ergo … AI?

Even if AGI does not lead to extensive job losses or species extinction, there are other serious ramifications. Ever since the Age of Reason, human existence has been grounded in a belief that we matter because we think. 

This belief that thinking defines our existence has deep philosophical roots. It was René Descartes, writing in 1637, who articulated the now-famous phrase: “Je pense, donc je suis” (“I think, therefore I am”). He later translated it into Latin: “Cogito, ergo sum.” In so doing, he proposed that certainty could be found in the act of individual thought. Even if he were deceived by his senses, or misled by others, the very fact that he was thinking proved that he existed.

In this view, the self is anchored in cognition. It was a revolutionary idea at the time and gave rise to Enlightenment humanism, the scientific method and, ultimately, modern democracy and individual rights. Humans as thinkers became the central figures of the modern world.

Which raises a profound question: If machines can now think, or appear to think, and we outsource our thinking to AI, what does that mean for the modern conception of the self? A recent study reported by 404 Media explores this conundrum. It found that when people rely heavily on generative AI for work, they engage in less critical thinking, which, over time, can “result in the deterioration of cognitive faculties that ought to be preserved.”

Where do we go from here?

If AGI is coming in the next few years — or soon thereafter — we must rapidly grapple with its implications not just for jobs and safety, but for who we are. And we must do so while also acknowledging its extraordinary potential to accelerate discovery, reduce suffering and extend human capability in unprecedented ways. For example, Amodei has said that “powerful AI” will enable 100 years of biological research and its benefits, including improved healthcare, to be compressed into 5 to 10 years. 

The forecasts presented in AI 2027 may or may not be correct, but they are plausible and provocative. And that plausibility should be enough. As humans with agency, and as members of companies, governments and societies, we must act now to prepare for what may be coming. 

For businesses, this means investing in both technical AI safety research and organizational resilience, creating roles that integrate AI capabilities while amplifying human strengths. For governments, it requires accelerated development of regulatory frameworks that address both immediate concerns like model evaluation and longer-term existential risks. For individuals, it means embracing continuous learning focused on uniquely human skills including creativity, emotional intelligence and complex judgment, while developing healthy working relationships with AI tools that do not diminish our agency.

The time for abstract debate about distant futures has passed; concrete preparation for near-term transformation is urgently needed. Our future will not be written by algorithms alone. It will be shaped by the choices we make, and the values we uphold, starting today.

Gary Grossman is EVP of technology practice at Edelman and global lead of the Edelman AI Center of Excellence.


