SPC: Evolving Self-Play Critic via Adversarial Games for LLM Reasoning

1The University of Hong Kong, 2Tencent, 3Tsinghua University, 4MBZUAI

Abstract

Evaluating the step-by-step reliability of large language model (LLM) reasoning, such as Chain-of-Thought, remains challenging due to the difficulty and cost of obtaining high-quality step-level supervision. In this paper, we introduce Self-Play Critic (SPC), a novel approach in which a critic model evolves its ability to assess reasoning steps through adversarial self-play games, eliminating the need for manual step-level annotation. SPC fine-tunes two copies of a base model to play two roles: a "sneaky generator" that deliberately produces erroneous steps designed to be difficult to detect, and a "critic" that analyzes the correctness of reasoning steps. The two models engage in an adversarial game in which the generator aims to fool the critic, while the critic seeks to identify the generator's errors. Using reinforcement learning based on the game outcomes, the models improve iteratively: the winner of each confrontation receives a positive reward and the loser a negative reward, driving continuous self-evolution. Experiments on three reasoning-process benchmarks (ProcessBench, PRM800K, DeltaBench) demonstrate that SPC progressively enhances its error-detection capability (e.g., accuracy increases from 70.8% to 77.7% on ProcessBench) and surpasses strong baselines, including a distilled R1 model. Furthermore, applying SPC to guide the test-time search of diverse LLMs significantly improves their mathematical reasoning performance on MATH500 and AIME2024, outperforming state-of-the-art process reward models.


Introduction


We continuously generate reinforcement training samples for the critic through adversarial games. The sneaky generator aims to create subtly erroneous steps to challenge the critic, while the critic must accurately distinguish correct from incorrect steps in a mixed stream of both. Because their optimization objectives oppose each other, the two models keep learning from one another, much as human players improve at board games through competition.
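To make this concrete, here is a minimal sketch of one self-play round in Python. The `sneaky_generator` and `critic` callables, their signatures, and the string verdicts are illustrative assumptions rather than the paper's actual interface; in practice, genuinely correct steps are also mixed into the critic's inputs so it cannot simply label every step as wrong.

```python
def play_round(sneaky_generator, critic, partial_solution, correct_step):
    """One adversarial self-play round between the two fine-tuned models.

    `sneaky_generator` and `critic` are hypothetical callables standing in
    for the two LLMs; both interfaces are assumptions for illustration.
    """
    # The sneaky generator rewrites the correct step into a subtly wrong one.
    wrong_step = sneaky_generator(partial_solution, correct_step)

    # The critic analyzes the corrupted step and returns "correct" or "incorrect".
    verdict = critic(partial_solution, wrong_step)

    # Zero-sum outcome: the winner of the round gets +1, the loser -1.
    critic_reward = 1 if verdict == "incorrect" else -1
    sneaky_reward = -critic_reward
    return critic_reward, sneaky_reward
```

These per-round rewards are what the reinforcement-learning update consumes, so both players keep receiving fresh training signal as their opponent improves.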


Method

[Figure: the SPC pipeline]

The framework of our proposed SPC. We randomly select a correct step, together with the partial solution preceding it, and feed them to the sneaky generator, which first selects one of the predefined error types and then converts the correct step into an incorrect one. The successfully generated incorrect step is then fed to the critic for error detection. If the critic identifies the error, it receives a reward of +1 and the sneaky generator receives -1; if the critic is deceived, the rewards are reversed (-1 for the critic, +1 for the sneaky generator).
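As an illustration of the sneaky-generation step, the sketch below prompts the generator to pick an error type and then corrupt the step. The error-type list, the prompt wording, and the `llm_generate` callable are placeholders assumed for this example, not the paper's actual taxonomy or prompts.

```python
# Illustrative placeholders; the paper's predefined error types may differ.
ERROR_TYPES = [
    "calculation error",
    "logical inconsistency",
    "misuse of a problem condition",
]

def make_sneaky_step(llm_generate, partial_solution: str, correct_step: str) -> str:
    """Ask the sneaky generator to choose an error type and rewrite the
    correct step into a subtly incorrect one (hypothetical prompt)."""
    prompt = (
        "You will corrupt one reasoning step so that the error is hard to detect.\n"
        f"Predefined error types: {', '.join(ERROR_TYPES)}.\n\n"
        f"Partial solution so far:\n{partial_solution}\n\n"
        f"Correct next step:\n{correct_step}\n\n"
        "First choose one error type, then rewrite the step so that it contains "
        "that error while keeping the original style and length."
    )
    return llm_generate(prompt)
```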


Results

[Tables 1–4: experimental results]