Tags: Text Ranking · Transformers · Safetensors · sentence-transformers · qwen3_vl · image-text-to-text · multimodal rerank · text rerank
Instructions for using Qwen/Qwen3-VL-Reranker-2B with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use Qwen/Qwen3-VL-Reranker-2B with Transformers:
```python
# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("Qwen/Qwen3-VL-Reranker-2B")
model = AutoModelForImageTextToText.from_pretrained("Qwen/Qwen3-VL-Reranker-2B")
```

- sentence-transformers
How to use Qwen/Qwen3-VL-Reranker-2B with sentence-transformers:
```python
from sentence_transformers import CrossEncoder

model = CrossEncoder("Qwen/Qwen3-VL-Reranker-2B")

query = "Which planet is known as the Red Planet?"
passages = [
    "Venus is often called Earth's twin because of its similar size and proximity.",
    "Mars, known for its reddish appearance, is often referred to as the Red Planet.",
    "Jupiter, the largest planet in our solar system, has a prominent red spot.",
    "Saturn, famous for its rings, is sometimes mistaken for the Red Planet.",
]

scores = model.predict([(query, passage) for passage in passages])
print(scores)
```

- Notebooks
- Google Colab
- Kaggle
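`CrossEncoder.predict` returns one relevance score per `(query, passage)` pair, with higher scores meaning more relevant. A minimal sketch of turning those scores into a ranking; the numbers here are illustrative placeholders, not real model output:

```python
# Rank passages by reranker score, highest first.
query = "Which planet is known as the Red Planet?"
passages = [
    "Venus is often called Earth's twin because of its similar size and proximity.",
    "Mars, known for its reddish appearance, is often referred to as the Red Planet.",
    "Jupiter, the largest planet in our solar system, has a prominent red spot.",
    "Saturn, famous for its rings, is sometimes mistaken for the Red Planet.",
]

# Placeholder scores standing in for the model.predict(...) output above.
scores = [0.12, 0.98, 0.35, 0.21]

ranked = sorted(zip(scores, passages), key=lambda pair: pair[0], reverse=True)
for score, passage in ranked:
    print(f"{score:.2f}  {passage}")
```

In practice, replace `scores` with the array returned by `model.predict(...)`; the top-ranked passage is the best candidate to feed into a downstream reader or LLM.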
add model pipeline info and technical report info
README.md (changed):
````diff
@@ -1,10 +1,15 @@
 ---
 license: apache-2.0
+library_name: transformers
+pipeline_tag: text-ranking
+
 base_model:
 - Qwen/Qwen3-VL-2B-Instruct
+
 tags:
 - transformers
 - multimodal rerank
+- text rerank
 ---
 # Qwen3-VL-Reranker-2B
 
@@ -34,7 +39,7 @@ While the Embedding model generates high-dimensional vectors for broad applicati
 - Number of Parameters: 2B
 - Context Length: 32k
 
-For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwen.ai/blog?id=qwen3-vl-embedding), [GitHub](https://github.com/QwenLM/Qwen3-VL-Embedding).
+For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [technical report](https://arxiv.org/abs/2601.04720), [blog](https://qwen.ai/blog?id=qwen3-vl-embedding), [GitHub](https://github.com/QwenLM/Qwen3-VL-Embedding).
 
 ## Qwen3-VL-Embedding and Qwen3-VL-Reranker Model list
 
@@ -214,7 +219,7 @@ If you find our work helpful, feel free to give us a cite.
 @article{qwen3vlembedding,
   title={Qwen3-VL-Embedding and Qwen3-VL-Reranker: A Unified Framework for State-of-the-Art Multimodal Retrieval and Ranking},
   author={Li, Mingxin and Zhang, Yanzhao and Long, Dingkun and Chen Keqin and Song, Sibo and Bai, Shuai and Yang, Zhibo and Xie, Pengjun and Yang, An and Liu, Dayiheng and Zhou, Jingren and Lin, Junyang},
-  journal={arXiv},
+  journal={arXiv preprint arXiv:2601.04720},
   year={2026}
 }
 ```
````
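For reference, applying the frontmatter hunk of this commit yields the following metadata block (reconstructed from the diff; on the Hugging Face Hub, `library_name` and `pipeline_tag` are the fields that surface the model under the Transformers library and the text-ranking task filter):

```yaml
---
license: apache-2.0
library_name: transformers
pipeline_tag: text-ranking

base_model:
- Qwen/Qwen3-VL-2B-Instruct

tags:
- transformers
- multimodal rerank
- text rerank
---
```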