Google DeepMind has unveiled an upgraded version of its math-focused AI, which has now surpassed an average gold medalist in solving Olympiad geometry problems. Only a year ago, AlphaGeometry, an AI problem solver from Google’s research team, matched the performance of silver medalists in the International Mathematical Olympiad (IMO), a prestigious competition known for its incredibly tough math problems designed for gifted high school students.
AlphaGeometry2 reportedly improves significantly on its predecessor, solving around 84% of the geometry problems in its benchmark. It combines Google’s Gemini model with a “symbolic engine,” helping it tackle complex geometry problems that require deep deductive reasoning and rigorous proofs.
Excited to share details of AlphaGeometry2 (AG2), part of the system that achieved silver-medal standard at IMO 2024 last July! AG2 now has surpassed the average gold-medalist in solving Olympiad geometry problems, achieving a solving rate of 84% for all IMO geometry problems… https://t.co/jAVTpNdBMu pic.twitter.com/eXHstdeVTP
— Thang Luong (@lmthang) February 7, 2025
According to the researchers, it solved 42 out of the 50 IMO geometry problems in the benchmark, beating the average gold medalist’s score of 40.9. That’s a leap from its previous iteration, which solved only 54% of the problems.
To get there, the model trained on a massive dataset of over 300 million synthetic theorems and proofs, each increasing in difficulty. The upgraded training set is larger and more diverse than what the first AlphaGeometry used.
Why Google DeepMind is working on Olympiad math problems

So, why is DeepMind so interested in a high-school-level math competition? The researchers believe that cracking tough geometry problems, especially Euclidean ones, could be a big step toward building more advanced AI.
Proving mathematical theorems, like the Pythagorean theorem, isn’t just about getting the right answer – it’s about logical reasoning and choosing the right steps to reach a solution. If DeepMind is right, these skills might be a key ingredient for future general-purpose AI models, helping them think more like humans when tackling complex problems.
To that end, AlphaGeometry is built from a mix of components, including a specialized language model and a neuro-symbolic system. Unlike typical neural networks, which learn everything from massive datasets, the symbolic side has its abstract reasoning rules coded in directly by humans.
To keep it precise, the team trained the language model to communicate in a formal mathematical language. This lets the system check its own logic, ensuring its proofs are verifiable and reducing the risk of hallucinations, the kind of false or nonsensical statements that AI chatbots sometimes generate.
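To make the division of labor concrete, here is a minimal toy sketch of a neuro-symbolic loop in the spirit of what the article describes: a symbolic engine saturates known facts with deduction rules, and when it gets stuck, a proposer (a language model in AlphaGeometry; a hard-coded stand-in here) suggests an auxiliary construction whose facts hold by definition. The `par` predicate, the rules, and the `midline_proposer` are all invented for illustration; this is not DeepMind’s implementation.

```python
# Toy neuro-symbolic loop: symbolic deduction plus a construction proposer.
# All predicates and rules here are invented for illustration.
from itertools import product

def closure(facts):
    """Saturate 'par' (parallel) facts under symmetry and transitivity."""
    facts = set(facts)
    while True:
        new = {("par", b, a) for (_, a, b) in facts}           # symmetry
        for (_, a, b), (_, c, d) in product(facts, repeat=2):
            if b == c:
                new.add(("par", a, d))                         # transitivity
        if new <= facts:                                       # fixpoint reached
            return facts
        facts |= new

def prove(premises, goal, proposer, max_rounds=5):
    """Alternate exhaustive symbolic deduction with proposed constructions."""
    facts = set(premises)
    for _ in range(max_rounds):
        facts = closure(facts)
        if goal in facts:
            return True
        extra = proposer(facts, goal)      # ask for an auxiliary construction
        if not extra:
            return False                   # proposer has nothing to add
        facts |= extra
    return False

# A hypothetical proposer: "draw a midline m parallel to both target lines";
# its facts hold by construction, the way auxiliary points do in geometry.
def midline_proposer(facts, goal):
    _, a, c = goal
    return {("par", a, "m"), ("par", "m", c)}

# Deduction alone suffices here: l1 ∥ l2 and l2 ∥ l3 imply l3 ∥ l1.
print(prove({("par", "l1", "l2"), ("par", "l2", "l3")},
            ("par", "l3", "l1"), lambda f, g: set()))          # True

# Here the engine stalls until the proposer adds the midline facts.
print(prove({("par", "a", "b")}, ("par", "a", "c"), midline_proposer))  # True
```

The key design point this cartoon captures is that the symbolic engine never guesses: it only applies sound rules, so any proof it finds is verifiable, while the creative leap of choosing a construction is delegated to the learned proposer.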
Featured image: Canva / Google DeepMind
The post DeepMind claims its AI outperforms Olympiad gold medalists in solving maths problems appeared first on ReadWrite.