Getting Started with Rex-Omni
Rex-Omni is a 3B-parameter multimodal model that unifies visual perception tasks under a single "next-point prediction" framework.
Supported tasks include object detection, OCR, pointing, keypoint localization, and visual prompting.
The official README documents each task in detail, with examples. What follows are my own practice notes ✌️
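To get a feel for the "next-point prediction" idea, here is a small decoding sketch. It assumes (this scheme and the function names are my own illustration, not taken from the repo) that each coordinate is emitted as a quantized bin in [0, 999] that must be rescaled back to pixel space:

```python
def bin_to_pixel(bin_idx: int, size: int, num_bins: int = 1000) -> float:
    """Map a quantized coordinate bin back to pixel space.

    Assumes the model quantizes each coordinate into `num_bins`
    uniform bins over the image dimension (hypothetical scheme).
    """
    return bin_idx / (num_bins - 1) * (size - 1)

def decode_box(bins, width: int, height: int):
    """Decode a predicted box given as (x0, y0, x1, y1) bins."""
    x0, y0, x1, y1 = bins
    return (bin_to_pixel(x0, width), bin_to_pixel(y0, height),
            bin_to_pixel(x1, width), bin_to_pixel(y1, height))

# A box predicted as bins (0, 0, 999, 999) covers the full 640x480 image.
print(decode_box((0, 0, 999, 999), width=640, height=480))
# → (0.0, 0.0, 639.0, 479.0)
```

The point is only that detection, pointing, and keypoints all reduce to emitting coordinate tokens, which is what lets one decoder serve all the tasks above.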
Environment
Prepare a Conda environment,
conda create -n rexomni python=3.10 -y
conda activate rexomni
# Install PyTorch (CPU version)
pip install torch torchvision
# Install PyTorch with CUDA (CUDA version must not exceed the driver version shown by nvidia-smi)
# https://pytorch.org/get-started/locally
pip install torch==2.7.0 torchvision --index-url https://download.pytorch.org/whl/cu128
Prepare Rex-Omni,
git clone --depth 1 https://github.com/IDEA-Research/Rex-Omni.git
cd Rex-Omni
pip install -r requirements.txt
pip install -v -e .
If you hit a flash-attn installation error,
# Option 1: install a prebuilt flash-attn wheel
wget https://github.com/Dao-AILab/flash-attention/releases/download/v2.7.4.post1/flash_attn-2.7.4.post1+cu12torch2.7cxx11abiTRUE-cp310-cp310-linux_x86_64.whl
pip install ./flash_attn-*.whl
# Option 2: build flash-attn from source
# https://github.com/Dao-AILab/flash-attention
conda install -c nvidia cuda=12.8
# pip install -U pip setuptools
pip install packaging psutil ninja
MAX_JOBS=4 pip install flash-attn --no-build-isolation
# Check the flash-attn version (mind the version requirements)
# Rex-Omni: flash-attn==2.7.4.post1
# xformers: flash-attn>=2.7.1,<=2.7.4
python -c "import flash_attn; print(flash_attn.__version__)"
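The two constraints above can also be checked programmatically. A minimal sketch; note the version-tuple parsing here is deliberately simplistic and drops the `.post1` suffix, whereas pip's PEP 440 ordering ranks 2.7.4.post1 strictly above 2.7.4:

```python
def vtuple(v: str):
    """Parse 'X.Y.Z[.postN]' into a tuple of its numeric parts only.

    Simplistic on purpose: '+cu12...' local tags and '.postN'
    suffixes are dropped, so 2.7.4.post1 compares equal to 2.7.4.
    """
    return tuple(int(p) for p in v.split("+")[0].split(".") if p.isdigit())

def ok_for_xformers(v: str) -> bool:
    """xformers wants flash-attn >= 2.7.1 and <= 2.7.4 (per the note above)."""
    return vtuple("2.7.1") <= vtuple(v) <= vtuple("2.7.4")

print(ok_for_xformers("2.7.4.post1"))  # → True (suffix dropped by the parser)
print(ok_for_xformers("2.6.0"))        # → False
```

In practice, this means the pinned flash-attn==2.7.4.post1 only barely fits the xformers window, so avoid upgrading either package independently.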
Inference
# Use model: Rex-Omni-AWQ, not Rex-Omni
# vLLM params adjusted to reduce HBM usage
HF_ENDPOINT=https://hf-mirror.com python practice/Rex-Omni/infer_awq.py
# HF_ENDPOINT=https://hf-mirror.com python practice/Rex-Omni/infer.py
# Notice:
# Cannot use FlashAttention-2 backend for Volta and Turing GPUs
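One workaround for the Volta/Turing limitation is to force vLLM onto a different attention backend via an environment variable, assuming the installed vLLM build ships the xformers backend:

```shell
# On Volta/Turing (e.g. V100, T4), fall back from FlashAttention-2 to xformers
export VLLM_ATTENTION_BACKEND=XFORMERS
```

Set this before launching the inference script; on Ampere or newer it is unnecessary.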
Code,
Results,

Training
Closing
Let's Go Coding ~