Autonomous agents are already contributing to a wide range of software development tasks. But up to this point, strong coding agents have relied on proprietary models, which means that even if you use an open-source agent like OpenHands, you are still dependent on API calls to an external service.
Today, we are excited to introduce OpenHands LM, a new open coding model that:

- is open and available for download on Hugging Face;
- is a reasonable size (32B parameters), so it can run locally rather than only behind an external API;
- achieves strong performance on the SWE-Bench Verified benchmark.
Read below for more details and our future plans!
OpenHands LM is built on the foundation of Qwen Coder 2.5 Instruct 32B, leveraging its powerful base capabilities for coding tasks. What sets OpenHands LM apart is our specialized fine-tuning process.
We evaluated OpenHands LM using our latest iterative evaluation protocol on the SWE-Bench Verified benchmark, and the results are impressive: the model resolves 37.2% of issues.
Here’s how OpenHands LM compares to other leading open-source models: our 32B-parameter model achieves performance approaching that of much larger models. While the largest models (671B parameters) score slightly higher, OpenHands LM performs remarkably well for its size, opening up possibilities for local deployment that simply aren’t feasible with larger models.
You can start using OpenHands LM immediately through these channels:
**Run the model with Ollama:**

```bash
ollama run omercelik/openhands-lm
```
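Ollama exposes an OpenAI-compatible API on its default port (11434), so once the model is pulled you can give it a quick local smoke test. This is just a sketch; the model tag matches the pull command above:

```bash
# Quick check against Ollama's OpenAI-compatible endpoint (default port 11434)
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "omercelik/openhands-lm",
        "messages": [{"role": "user", "content": "Write a one-line hello world in Python."}]
      }'
```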
**Download the model from Hugging Face.** The model weights are hosted on Hugging Face and can be downloaded directly from there.
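For example, you can fetch the weights with the Hugging Face CLI. The repository id below is an assumption for illustration; substitute the actual OpenHands LM repo:

```bash
# Sketch: download the weights for local serving.
# The repo id is assumed; replace it with the actual OpenHands LM repository.
huggingface-cli download all-hands/openhands-lm-32b-v0.1 --local-dir openhands-lm-32b
```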
**Create an OpenAI-compatible endpoint with a model serving framework.** For optimal performance, we recommend serving the model on a GPU with SGLang or vLLM.
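As a rough sketch of what that looks like (the local model path and port are assumptions carried over from the download step; both frameworks expose an OpenAI-compatible API):

```bash
# Option A: vLLM (OpenAI-compatible API, port 8000 by default)
vllm serve ./openhands-lm-32b --served-model-name openhands-lm

# Option B: SGLang
python3 -m sglang.launch_server --model-path ./openhands-lm-32b --port 8000
```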
**Point your OpenHands agent to the new model.** Download OpenHands and follow the instructions for using an OpenAI-compatible endpoint.
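If you launched an endpoint as above, wiring the agent to it looks roughly like the following. The variable names here are illustrative, not authoritative; check the OpenHands documentation for the exact configuration keys in your version:

```bash
# Hypothetical sketch: point the agent at a locally served OpenAI-compatible model.
export LLM_MODEL="openai/openhands-lm"          # "openai/" prefix = generic OpenAI-compatible provider
export LLM_BASE_URL="http://localhost:8000/v1"  # the endpoint started in the previous step
export LLM_API_KEY="dummy"                      # local servers typically ignore the key
```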
This initial release marks just the beginning of our journey. We will continue enhancing OpenHands LM based on community feedback and ongoing research initiatives.
In particular, note that the model is still a research preview: (1) it may be best suited for solving GitHub issues and may perform less well on more varied software engineering tasks; (2) it may sometimes generate repetitive steps; and (3) it is somewhat sensitive to quantization, and may not run at full performance at lower quantization levels. Our next releases will focus on addressing these limitations.
We’re also developing more compact versions of the model (including a 7B parameter variant) to support users with limited computational resources. These smaller models will preserve OpenHands LM’s core strengths while dramatically reducing hardware requirements.
We encourage you to experiment with OpenHands LM, share your experiences, and participate in its evolution. We invite you to be part of the OpenHands LM journey: by contributing your experiences and feedback, you’ll help shape the future of this open-source initiative. Together, we can create better tools for tomorrow’s software development landscape.
We can’t wait to see what you’ll create with OpenHands LM!