Abstract:
This study investigates the capability of small reasoning-oriented language models to construct analytical solutions to differential equations. Computational experiments are conducted on three models: DeepSeek-R1-Distill-Qwen-1.5B, Qwen2.5-1.5B, and Open-Reasoner-Zero-1.5B. To extract the final answers from the models' reasoning traces, post-processing is applied using two additional language models, Qwen2.5:latest and Llama3.2:latest. The extracted solutions are then compared with reference solutions using the BLEU metric. Our results show that Open-Reasoner-Zero-1.5B achieves the best average performance, reaching the highest BLEU score (0.978) on second-order homogeneous equations.
Keywords: small language models, differential equations, DeepSeek-R1-Distill-Qwen-1.5B, Qwen2.5-1.5B, Open-Reasoner-Zero-1.5B
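The evaluation step described in the abstract — scoring an extracted solution string against a reference solution with BLEU — can be sketched as follows. This is an illustrative, self-contained implementation of standard BLEU with add-one smoothing, not the study's actual evaluation code; the solution strings are hypothetical placeholders, not model outputs from the experiments.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate: str, reference: str, max_n: int = 4) -> float:
    """Whitespace-tokenized BLEU of a candidate against one reference."""
    cand = candidate.split()
    ref = reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(cand, n))
        ref_counts = Counter(ngrams(ref, n))
        # Clipped n-gram matches: each candidate n-gram counts at most as
        # often as it appears in the reference.
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        # Add-one smoothing keeps short strings from collapsing to zero.
        precisions.append((overlap + 1) / (total + 1))
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    # Brevity penalty discourages overly short candidates.
    bp = 1.0 if len(cand) >= len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * geo_mean

reference = "y(x) = C1*exp(x) + C2*exp(-x)"
print(bleu(reference, reference))           # identical strings score 1.0
print(bleu("y(x) = C1*exp(x)", reference))  # partial solution scores lower
```

A real pipeline would additionally normalize the extracted answers (e.g. stripping reasoning text and unifying notation) before scoring, which is the role the post-processing models play in the study.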