
Model Card for Fine-tuned Llama-3.1 8B on SWE Bench Dataset

Model Overview

This model has been fine-tuned on the swe_bench dataset to automate the generation of bug fixes in software engineering tasks. It leverages issue descriptions, code diffs, and historical bug context to generate precise patches. The primary use case is to assist developers by quickly generating code fixes based on detailed bug descriptions.

Key Features:

  • Patch Generation: Produces code patches based on issue descriptions and optional context.
  • Contextual Fixes: Uses code diffs, bug reports, and PR history for more accurate bug fixes.
  • Assertions: Ensures generated patches adhere to certain conditions.

Intended Use

The model is designed for developers and software teams to automatically generate code patches for software issues. It can handle a variety of inputs such as issue descriptions and additional context and is ideal for teams dealing with frequent bug reports.

Inputs:

  1. Issue Description: Main input, taken from fix_issue_description.
  2. Issue Story: Optional additional context, from fix_story.
  3. Assertions: Conditions that the patch must meet, from fix_assertion_1, fix_assertion_2, etc.
  4. Bug and PR Context: Historical bug and PR context from fields such as bug_pr and bug_story, used during fine-tuning but not required for inference.
  5. File Paths and Code Differences: Paths and diffs used to generate a valid code patch.
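The exact prompt template used during fine-tuning is not documented in this card, so the following is only a sketch of how the fields above might be assembled into a single prompt (the layout and labels are assumptions, not the model's actual training format):

```python
def build_prompt(fix_issue_description, fix_story=None, assertions=None):
    """Combine the issue description with optional story and assertions.

    Hypothetical template: the real fine-tuning format is not published,
    so adjust this to match your own prompting experiments.
    """
    parts = [f"Issue: {fix_issue_description}"]
    if fix_story:
        parts.append(f"Context: {fix_story}")
    for i, assertion in enumerate(assertions or [], start=1):
        parts.append(f"Assertion {i}: {assertion}")
    parts.append("Patch:")
    return "\n".join(parts)

prompt = build_prompt(
    "Function X throws an error when Y happens.",
    fix_story="Regression introduced after refactoring module Z.",
    assertions=["The fix must not change the public API."],
)
print(prompt)
```

Since the Issue Story and Assertions are optional, the builder simply omits those lines when they are absent rather than emitting empty sections.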

Outputs:

  • Generated Code Patch: Based on input issue description and additional context.
  • Assertions: Conditions to ensure that the patch meets the desired criteria.

Dataset

Dataset: swe_bench-lite

This model is fine-tuned on the swe_bench dataset, which includes:

  • Issue Description: Describes the bug in detail.
  • Issue Story: Provides additional narrative or context around the bug.
  • Assertions: Fields related to the fix that ensure certain conditions are met.
  • Bug and PR Context: Provides historical context on bugs, used during fine-tuning.
  • File Paths and Code Diffs: Used to train the model to generate fixes.

Limitations

  • The model relies on well-defined issue descriptions to produce accurate patches.
  • The contextual data from past bug reports is leveraged during fine-tuning and may not generalize to all types of bug fixes.

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer. Llama-3.1 checkpoints use a tiktoken-based
# fast tokenizer, so use the Auto* classes rather than LlamaTokenizer.
model = AutoModelForCausalLM.from_pretrained('gtandon/ft_llama3_1_swe_bench')
tokenizer = AutoTokenizer.from_pretrained('gtandon/ft_llama3_1_swe_bench')

# Example input (issue description)
issue_description = "Function X throws an error when Y happens."
inputs = tokenizer(issue_description, return_tensors="pt")

# Generate a patch
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
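The decoded output is free-form text, so downstream tooling typically needs to locate the patch inside it. Assuming the model emits a standard unified diff (with `--- / +++` or `diff --git` markers — an assumption, since the exact output format is not documented here), a minimal extraction sketch looks like:

```python
import re

def extract_diff(generated_text):
    """Return the text starting at the first unified-diff marker, or None.

    Assumes the model's output contains standard 'diff --git' or '--- file'
    header lines; adjust the pattern if your outputs are formatted differently.
    """
    match = re.search(r"^(diff --git .*|--- .*)$", generated_text, re.MULTILINE)
    return generated_text[match.start():] if match else None

sample = (
    "Here is a possible fix:\n"
    "--- a/foo.py\n"
    "+++ b/foo.py\n"
    "@@ -1 +1 @@\n"
    "-bad\n"
    "+good\n"
)
patch = extract_diff(sample)
print(patch)
```

Extracted patches can then be applied with standard tooling (e.g. `git apply`) inside a sandbox before any review.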

Ethical Considerations

  • The generated patches should always be validated and reviewed by developers before deploying in production.
  • This model is designed to assist but should not replace thorough code reviews.

Citation

Please cite this model as follows:

@misc{tandon_ft_llama3_1_swe_bench,
  title={Automatic Bug Fix Generation using Llama-3.1 and SWE Bench Dataset},
  author={Tandon, Gauransh and Lemieux, Caroline and Holmes, Reid},
  year={2024},
  publisher={Hugging Face}
}