January 9, 2025 11:01 AM
Microsoft is doubling down on the potential of small language models (SLMs) with the unveiling of rStar-Math, a new reasoning technique that can be applied to small models to boost their performance on math problems — matching, and in some cases exceeding, the performance of OpenAI’s o1-preview model.
The technique is still in a research phase, as outlined in a paper published on the preprint site arXiv.org and credited to eight authors at Microsoft, Peking University, and Tsinghua University in China. It was applied to several smaller open-source models, including Microsoft’s own Phi-3 mini, Alibaba’s Qwen-1.5B (a 1.5-billion-parameter model), and Qwen-7B (a 7-billion-parameter model), and improved performance on all of them. On the third-party MATH benchmark — 12,500 word problems spanning branches such as geometry and algebra across all difficulty levels — it even exceeded OpenAI’s previously most advanced model.
...