
Deng Zhenhua, an ordinary programmer from Guangxi, China. Contact: dzh188@qq.com
-
Qwen3-Reranker-8B
Alibaba's text reranking model. Qwen3-Reranker-8B has the following features: Model Type: Text Reranking; Supported Languages: 100+; Number of Parameters: 8B; Context Length: 32k.
163K Pulls · 5 Tags · Updated 3 months ago
-
Qwen3-Embedding-0.6B
Alibaba's text embedding model. Qwen3-Embedding-0.6B has the following features: Model Type: Text Embedding; Supported Languages: 100+; Number of Parameters: 0.6B; Context Length: 32k; Embedding Dimension: Up to 1024, supports user-defined output...
tools · thinking · 21.1K Pulls · 2 Tags · Updated 3 months ago
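The user-defined output dimension mentioned above typically works by truncation: keep the first k components of the full embedding and re-normalize. The sketch below illustrates that idea in plain Python; it is an assumption about the mechanism, and the vector is synthetic, not real model output:

```python
import math

def truncate_embedding(vec, dim):
    """Keep the first `dim` components and re-normalize to unit length,
    so cosine similarity still behaves in the reduced space."""
    head = vec[:dim]
    norm = math.sqrt(sum(x * x for x in head))
    return [x / norm for x in head]

# Synthetic stand-in for a full-size embedding vector (not real model output).
full = [1.0, 2.0, 2.0, 0.0]
small = truncate_embedding(full, 3)
print(small)  # [0.333..., 0.666..., 0.666...] -- unit length in 3 dims
```

In practice the embedding itself would come from the model, e.g. via an inference server, and only the truncation/re-normalization step would run client-side.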
-
Qwen3-Embedding-8B
Alibaba's text embedding model. Qwen3-Embedding-8B has the following features: Model Type: Text Embedding; Supported Languages: 100+; Number of Parameters: 8B; Context Length: 32k; Embedding Dimension: Up to 4096, supports user-defined output...
tools · thinking · 19.2K Pulls · 4 Tags · Updated 3 months ago
-
Qwen3-Embedding-4B
Alibaba's text embedding model. Qwen3-Embedding-4B has the following features: Model Type: Text Embedding; Supported Languages: 100+; Number of Parameters: 4B; Context Length: 32k; Embedding Dimension: Up to 2560, supports user-defined output...
tools · thinking · 10.4K Pulls · 4 Tags · Updated 3 months ago
-
Qwen3-Reranker-0.6B
Alibaba's text reranking model. Qwen3-Reranker-0.6B has the following features: Model Type: Text Reranking; Supported Languages: 100+; Number of Parameters: 0.6B; Context Length: 32k.
tools · 6,557 Pulls · 2 Tags · Updated 3 months ago
-
Qwen3-Reranker-4B
Alibaba's text reranking model. Qwen3-Reranker-4B has the following features: Model Type: Text Reranking; Supported Languages: 100+; Number of Parameters: 4B; Context Length: 32k...
3,854 Pulls · 3 Tags · Updated 3 months ago
-
bge-reranker-v2-m3
BGE-Reranker-v2-M3 is a lightweight reranking model, optimized on the BGE-M3-0.5B architecture and designed for multilingual retrieval tasks, with particularly strengthened performance in mixed Chinese-English scenarios. Its core role is to provide efficient context reranking for RAG pipelines.
embedding · 1,311 Pulls · 1 Tag · Updated 3 months ago
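A reranker like this slots into the retrieve-then-rerank stage of a RAG pipeline. As a rough illustration of that stage only (not this model's actual scoring: a cross-encoder reranker such as bge-reranker-v2-m3 scores the raw query-passage text pair directly), here is a cosine-similarity rerank over embeddings, with synthetic vectors and hypothetical names:

```python
def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)

def rerank(query_vec, passages):
    """Sort (id, vector) passages by similarity to the query, best first.
    A cross-encoder reranker would instead score each (query, passage)
    text pair directly; this embedding-based version is a stand-in."""
    return sorted(passages, key=lambda p: cosine(query_vec, p[1]), reverse=True)

# Synthetic 2-d "embeddings" for a query and three candidate passages.
q = [1.0, 0.0]
docs = [("a", [0.0, 1.0]), ("b", [1.0, 0.1]), ("c", [0.5, 0.5])]
print([d for d, _ in rerank(q, docs)])  # ['b', 'c', 'a']
```

In a real pipeline the top-k results of this step would be passed to the LLM as context.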
-
DeepSeek-R1-0528-Qwen3-8B
DeepSeek-R1-0528-Qwen3-8B, including 2 quantized models: Q5_K_M and Q8_0.
742 Pulls · 2 Tags · Updated 3 months ago
-
Qwen3-30B-A3B-Instruct-2507
Qwen3-30B-A3B-Instruct-2507 has the following features: Type: Causal Language Models; Training Stage: Pretraining & Post-training; Number of Parameters: 30.5B in total and 3.3B activated; Number of Parameters (Non-Embedding): 29.9B; Number of Layers
tools · thinking · 574 Pulls · 2 Tags · Updated 1 month ago
-
Qwen3-14B
Includes 2 quantized GGUF versions: Qwen3-14B-Q5_K_M and Qwen3-14B-Q8_0.
315 Pulls · 2 Tags · Updated 3 months ago
-
ERNIE-4.5-0.3B-PT
This model was converted to GGUF format from baidu/ERNIE-4.5-0.3B-PT using llama.cpp, via ggml.ai's GGUF-my-repo space.
314 Pulls · 3 Tags · Updated 2 months ago
-
Qwen3-30B-A3B
Includes 3 quantized GGUF versions: Qwen3-30B-A3B-Q8_0, Qwen3-30B-A3B-Q5_K_M, and Qwen3-30B-A3B-F16.
233 Pulls · 3 Tags · Updated 3 months ago
-
bge-large-zh-v1.5
A large-size BGE model for Chinese tasks, version v1.5, 671M in size.
embedding · 224 Pulls · 1 Tag · Updated 3 months ago
-
bce-reranker-base_v1
bce-reranker-base_v1 is a cross-lingual semantic representation model developed by NetEase Youdao. It specializes in optimizing semantic search results and fine-grained reranking by semantic relevance, supports four languages (Chinese, English, Japanese, Korean), covers common business domains, and supports long-passage reranking (512-32k).
embedding · 163 Pulls · 1 Tag · Updated 3 months ago
-
ERNIE-4.5-21B-A3B-PT
This model was converted to GGUF format from baidu/ERNIE-4.5-21B-A3B-PT using llama.cpp, via ggml.ai's GGUF-my-repo space.
153 Pulls · 1 Tag · Updated 1 month ago
-
Qwen3-32B
Includes 2 quantized GGUF versions: Qwen3-32B-Q8_0 and Qwen3-32B-Q5_K_M.
146 Pulls · 2 Tags · Updated 3 months ago
-
Qwen3-8B
Includes 2 quantized GGUF versions: Qwen3-8B-Q5_K_M and Qwen3-8B-Q8_0.
106 Pulls · 2 Tags · Updated 3 months ago
-
Dmeta-embedding-zh
Dmeta-embedding is a cross-domain, cross-task, out-of-the-box Chinese embedding model suited to search, question answering, intelligent customer service, LLM+RAG, and other business scenarios. It supports inference with tools such as Transformers, Sentence-Transformers, and LangChain.
embedding · 100 Pulls · 2 Tags · Updated 3 months ago
-
Qwen3-4B
Includes 2 quantized GGUF versions: Qwen3-4B-Q5_K_M and Qwen3-4B-Q8_0.
91 Pulls · 2 Tags · Updated 3 months ago
-
piccolo-large-zh-v2
piccolo-large-zh-v2 is a Chinese embedding model developed by the general model group at SenseTime Research.
embedding · 71 Pulls · 1 Tag · Updated 3 months ago
-
bge-m3
Building on the strengths of the BGE models, the BGE-M3 text embedding model achieves several technical breakthroughs. BGE-M3 supports semantic representation and retrieval for more than 100 languages, with leading multilingual and cross-lingual retrieval capability, and supports input lengths up to 8192.
embedding · 567m · 59 Pulls · 1 Tag · Updated 3 months ago
-
Conan-embedding-v2
Conan-embedding-v2 is an embedding model released by Tencent, built on its originally trained Conan-1.4B large language model base. It achieves SOTA performance for both Chinese and English on the MTEB leaderboard, surpassing larger models from NVIDIA, Qwen, and others.