Aneesh Jain, Sindhu Tipirneni, Chandan Reddy

Abstract

The utilization of programming language (PL) models, pre-trained on large-scale code corpora, to automate software engineering processes has shown considerable potential for streamlining code generation tasks such as code completion, code translation, and program synthesis. However, current approaches mainly rely on supervised fine-tuning objectives borrowed from text generation, neglecting unique sequence-level characteristics of code, including compilability as well as syntactic and functional correctness. To address this limitation, we propose PPOCoder, a new framework for code generation that synergistically combines pre-trained PL models with Proximal Policy Optimization (PPO), a widely used deep reinforcement learning technique. By incorporating non-differentiable feedback from code execution and structure alignment, PPOCoder seamlessly integrates external code-specific knowledge into the model optimization process. Notably, PPOCoder is a task-agnostic and model-agnostic framework that can be applied across different code generation tasks and PLs. Extensive experiments on three code generation tasks demonstrate the effectiveness of our proposed approach compared to state-of-the-art methods, achieving significant improvements in compilation success rates and functional correctness across different PLs.
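To make the execution-based feedback described above concrete, the following minimal sketch (not taken from the paper) scores a generated Python snippet with a scalar reward: -1 if it fails to compile, 0 if it compiles but its unit tests fail or time out, and +1 if the tests pass. The function name execution_reward, the specific reward values, and the timeout are illustrative assumptions; PPOCoder's actual reward additionally includes syntactic and semantic structure-alignment terms and a KL penalty against the pre-trained model.

import os
import subprocess
import sys
import tempfile

def execution_reward(code: str, tests: str = "") -> float:
    """Illustrative scalar reward for a generated Python snippet.

    -1.0  the snippet does not compile (SyntaxError)
     0.0  it compiles, but the test block fails, errors, or times out
    +1.0  it compiles and the appended tests pass
    """
    # Compilability check: can the snippet be parsed and byte-compiled?
    try:
        compile(code, "<generated>", "exec")
    except SyntaxError:
        return -1.0
    if not tests:
        return 0.0
    # Functional-correctness check: run the snippet plus its unit tests
    # in a separate interpreter process with a timeout.
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code + "\n" + tests + "\n")
        path = f.name
    try:
        proc = subprocess.run(
            [sys.executable, path], capture_output=True, timeout=10
        )
        return 1.0 if proc.returncode == 0 else 0.0
    except subprocess.TimeoutExpired:
        return 0.0
    finally:
        os.unlink(path)

if __name__ == "__main__":
    candidate = "def add(a, b):\n    return a + b\n"
    print(execution_reward(candidate, "assert add(2, 3) == 5"))  # 1.0
    print(execution_reward("def add(a, b) return a"))            # -1.0

In a full training loop of this kind, such a scalar would be computed for each sampled sequence and fed to a PPO update (for example, via trl's PPOTrainer.step), with a KL penalty against the pre-trained model to keep generations close to the original distribution.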

People

Aneesh Jain


Sindhu Tipirneni


Chandan Reddy


Publication Details

Date of publication:
July 19, 2023
Journal:
CoRR abs/2301.13816 (arXiv preprint)
Publication note:

Parshin Shojaee, Aneesh Jain, Sindhu Tipirneni, Chandan K. Reddy: Execution-based Code Generation using Deep Reinforcement Learning. CoRR abs/2301.13816 (2023)