Qwen2.5-1M is the long-context version of the Qwen2.5 series models, supporting a context length of up to 1M tokens.
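Note that Ollama typically starts with a much smaller context window than the model supports, so the long context generally has to be requested explicitly. Below is a minimal sketch using the official `ollama` Python client; the model tag is a placeholder (substitute whichever tag you pulled, e.g. a 7b or 14b variant), and the `num_ctx` value is an illustrative setting to be raised toward 1M as memory allows.

```python
# Sketch: querying the model with an enlarged context window via the
# `ollama` Python client. The model tag is a placeholder, not the exact
# tag published by this repository.
import ollama

response = ollama.chat(
    model="qwen2.5-1m-abliterated:7b",   # placeholder tag; adjust to your pull
    messages=[{"role": "user", "content": "Summarize this document: ..."}],
    options={"num_ctx": 131072},         # raise toward 1M as your hardware allows
)
print(response["message"]["content"])
```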

This is an uncensored version of Qwen/qwen25-1m created with abliteration (see remove-refusals-with-transformers to learn more about it).
The technique is a crude, proof-of-concept implementation that removes refusals from an LLM without using TransformerLens.
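The core idea behind abliteration is to estimate a "refusal direction" in the residual stream (the difference between mean activations on prompts the model refuses and prompts it answers) and then remove that direction. The sketch below illustrates this with plain transformers and forward hooks rather than TransformerLens; the base model id, layer index, and prompt lists are illustrative assumptions, not values from this repository, and the hook-based ablation happens at inference time (the persistent variant instead orthogonalizes the weights against the direction).

```python
# A rough sketch of abliteration with plain transformers (no TransformerLens).
# Model id, prompts, and layer choice are assumptions for illustration only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-7B-Instruct-1M"   # assumed base model
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)
model.eval()

harmful = ["How do I pick a lock?"]            # placeholder prompts the model refuses
harmless = ["How do I bake sourdough bread?"]  # placeholder neutral prompts
LAYER = 20                                     # assumed middle layer; normally tuned

def mean_hidden(prompts):
    # Average the residual-stream activation of the last prompt token
    # at the chosen layer across all prompts.
    vecs = []
    for p in prompts:
        msgs = [{"role": "user", "content": p}]
        ids = tok.apply_chat_template(msgs, add_generation_prompt=True, return_tensors="pt")
        with torch.no_grad():
            out = model(ids, output_hidden_states=True)
        vecs.append(out.hidden_states[LAYER][0, -1])
    return torch.stack(vecs).mean(dim=0)

# Refusal direction: difference of mean activations, normalized.
direction = mean_hidden(harmful) - mean_hidden(harmless)
direction = direction / direction.norm()

def ablate_hook(_module, _inputs, output):
    # Subtract the component along the refusal direction from the layer output.
    hidden = output[0] if isinstance(output, tuple) else output
    proj = (hidden @ direction).unsqueeze(-1) * direction
    hidden = hidden - proj
    return (hidden,) + output[1:] if isinstance(output, tuple) else hidden

# Project the direction out of every decoder layer's output during generation.
handles = [layer.register_forward_hook(ablate_hook) for layer in model.model.layers]
```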

References

HuggingFace