CodeQwen1.5 is a large language model pretrained on an extensive corpus of code.

Code 7B

57.9K Pulls Updated 4 weeks ago

Readme

CodeQwen1.5 is based on Qwen1.5. It is trained on 3 trillion tokens of code data. Its major features include:

  • Strong code generation capabilities and competitive performance across a series of benchmarks
  • Support for long context understanding and generation with a maximum context length of 64K tokens
  • Support for 92 coding languages
  • Excellent performance in Text-to-SQL, bug fixing, and other coding use cases
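As with other models in the Ollama library, CodeQwen1.5 can be pulled and run locally with the Ollama CLI. A minimal sketch, assuming the model is published under the `codeqwen` tag and an Ollama server is running locally:

```shell
# Download the default CodeQwen1.5 model and start an interactive session
ollama run codeqwen

# Or pass a one-shot prompt directly, e.g. a Text-to-SQL task
ollama run codeqwen "Write a SQL query that returns the top 5 customers by total order value."
```

The first invocation pulls the model weights if they are not already cached; subsequent runs start immediately from the local copy.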

References

Blog Post

GitHub

HuggingFace