[Paper Reading] Understanding the Limitations of Mathematical Reasoning in Large Language Models

Event details
🎥 This event will be recorded.
Event description

Apple recently released a paper introducing a new benchmark (GSM-Symbolic) for assessing mathematical reasoning in LLMs.

The GSM8K benchmark is widely used to assess the mathematical reasoning of models on grade-school-level questions. While the performance of LLMs on GSM8K has significantly improved in recent years, it remains unclear whether their mathematical reasoning capabilities have genuinely advanced, raising questions about the reliability of the reported metrics. To address these concerns, we conduct a large-scale study on several state-of-the-art open and closed models. To overcome the limitations of existing evaluations, we introduce GSM-Symbolic, an improved benchmark created from symbolic templates that allow for the generation of a diverse set of questions. GSM-Symbolic enables more controllable evaluations, providing key insights and more reliable metrics for measuring the reasoning capabilities of models. Our findings reveal that LLMs exhibit noticeable variance when responding to different instantiations of the same question.
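To make the idea of symbolic templates concrete, here is a minimal sketch (the template, names, and value ranges are hypothetical illustrations, not taken from the paper) of how one grade-school question can be turned into many instantiations, each with a programmatically computed ground-truth answer:

```python
import random

# Hypothetical symbolic template: names and numbers are slots to be filled.
TEMPLATE = (
    "{name} picks {x} apples on Monday and {y} apples on Tuesday. "
    "How many apples does {name} have in total?"
)

def instantiate(seed):
    """Fill the template's symbolic slots with sampled values.

    Because the answer is computed from the sampled values, every
    instantiation comes with a reliable ground truth, enabling
    controlled evaluation across many variants of the same question.
    """
    rng = random.Random(seed)
    name = rng.choice(["Sophie", "Liam", "Ava"])
    x = rng.randint(2, 20)
    y = rng.randint(2, 20)
    question = TEMPLATE.format(name=name, x=x, y=y)
    answer = x + y  # ground truth follows from the template's structure
    return question, answer

if __name__ == "__main__":
    for seed in range(3):
        q, a = instantiate(seed)
        print(f"Q: {q}\nA: {a}\n")
```

Evaluating a model on many such instantiations, rather than on a single fixed phrasing, is what lets the paper measure how much a model's accuracy varies when only surface details of the question change.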