
Lexa-Rho is part of the Lexa family, with performance approaching that of leading models such as GPT-5 Thinking, Gemini 2.5 Pro and DeepSeek-R1.

Available tags: 8b, 70b

ollama run RobiLabs/lexa-rho:70b

Details

2216e8944d5e · 43GB · llama · 70.6B parameters · Q4_K_M



Lexa‑Rho

Lexa‑Rho is the first reasoning‑first large language model from Robi Labs. Unlike traditional generative models that match patterns and answer simple questions, Lexa‑Rho was designed to think through problems, reflect on multiple paths and only then respond. This marks the start of the Lexa reasoning series—AI that doesn’t just respond, but reflects, plans and solves with intention.

Why reasoning matters

Conventional models excel at surface‑level Q&A, but they struggle when tasks require multi‑step logic, deduction or strategic planning. Lexa‑Rho’s architecture deliberately slows down to consider intermediate steps. When faced with a complex problem, it breaks the problem down, explores possible paths and only then replies. This approach delivers more coherent solutions and fewer shortcuts than models that simply guess.

Benchmark performance

Lexa‑Rho’s reasoning‑first design pays off across a range of difficult benchmarks:


| Benchmark          | Lexa-Rho score   | Short meaning                        |
|--------------------|------------------|--------------------------------------|
| AIME 2024          | 90.0 %           | On par with top math competitors     |
| MMLU               | 92.5 %           | Strong general-knowledge reasoning   |
| Codeforces         | 96.0th percentile| Competitive with expert programmers  |
| GPQA Diamond       | 88.0 %           | Strong on graduate-level science     |
| MATH-500           | 97.5 %           | Exceptional mastery of advanced maths|
| SWE-bench Verified | 85.0 %           | Reliable bug-finding and code fixes  |

These results place Lexa‑Rho among the top open models for mathematics, programming and logic tasks.

Reinforcement learning that rewards thinking

To encourage deliberate reasoning, Lexa‑Rho uses a custom reinforcement‑learning objective that rewards correct intermediate steps and discourages shortcuts. This helps reduce hallucinations and fosters logical consistency—Lexa‑Rho learns to value the journey, not just the final answer. The approach is inspired by DeepSeek‑R1’s reasoning‑focused training, which improved inference across math, programming and logic.
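The idea of rewarding correct intermediate steps rather than only final answers can be illustrated with a toy scoring function. This sketch is not Robi Labs' actual (unpublished) training objective; the arithmetic step verifier and the reward weights are invented purely for illustration.

```python
# Toy illustration of a process-based reward: score a chain of
# reasoning steps, not just the final answer. Verifier and weights
# are invented for illustration; this is not the real objective.

def process_reward(steps, verify_step, final_correct,
                   step_bonus=1.0, final_bonus=2.0, shortcut_penalty=0.5):
    """Reward each verifiably correct intermediate step, add a bonus
    for a correct final answer, and penalise chains that skip straight
    to an answer with no visible reasoning (a 'shortcut')."""
    reward = sum(step_bonus for s in steps if verify_step(s))
    if final_correct:
        reward += final_bonus
    if not steps:  # answered with no intermediate steps at all
        reward -= shortcut_penalty
    return reward

# Example verifier: checks simple "a + b = c" arithmetic steps.
def check_arithmetic(step):
    lhs, _, rhs = step.partition("=")
    a, _, b = lhs.partition("+")
    try:
        return int(a) + int(b) == int(rhs)
    except ValueError:
        return False

chain = ["2 + 3 = 5", "5 + 4 = 9"]
print(process_reward(chain, check_arithmetic, final_correct=True))  # 4.0
```

A correct answer reached with no checked steps scores lower than one supported by verified steps, which is the sense in which such an objective "values the journey".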

Features & use cases

Lexa‑Rho will power Lexa Chat v2.0, delivering smarter and more reliable conversations. Early testers report more coherent responses, grounded tool use and better alignment with user tone. It excels at tasks such as:

  • Math & science help – works through derivations with detailed explanations.

  • Coding assistance – spots bugs, explains code flow and synthesizes verified solutions.

  • Decision support – surfaces trade‑offs, challenges assumptions and goes beyond summaries to support careful reasoning.

Availability & rollout

Robi Labs is rolling out Lexa‑Rho in three stages:

  1. Internal testing – currently underway to refine quality and safety.

  2. Premium access – launching soon with Lexa Chat v2.0 for Pro, Team, Enterprise, Edu and Partner plans.

  3. Free tier – later this year, with reasonable daily usage caps.

Built for the future

Lexa‑Rho is the first release in the Lexa reasoning series. Robi Labs is already working on larger and specialised variants fine‑tuned for code synthesis, scientific discovery and knowledge retrieval. They are also exploring multimodal reasoning and tool‑calling capabilities, drawing on research from large‑context models. Future training techniques will incorporate mixtures of experts and hierarchical controllers to deepen reasoning while improving efficiency.

Models

Lexa‑Rho is available via Ollama in two sizes, an 8b tag (approx. 8.2 B parameters) and a 70b tag (approx. 70.6 B parameters), both with Q4_K_M quantisation:

ollama run RobiLabs/lexa-rho:8b

ollama run RobiLabs/lexa-rho:70b

To update from an older version, run:

ollama pull RobiLabs/lexa-rho
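Once pulled, the model can also be queried programmatically through the local REST API that Ollama serves (by default at http://localhost:11434). A minimal sketch in Python using only the standard library; the prompt is just an example:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(model, prompt):
    """Build the JSON request for Ollama's /api/generate endpoint.
    stream=False asks for one complete JSON reply instead of chunks."""
    body = json.dumps({"model": model, "prompt": prompt,
                       "stream": False}).encode("utf-8")
    return urllib.request.Request(
        OLLAMA_URL, data=body,
        headers={"Content-Type": "application/json"})

def generate(model, prompt):
    """Send the request to a locally running Ollama server and
    return the model's text completion."""
    req = build_generate_request(model, prompt)
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(generate("RobiLabs/lexa-rho:8b",
                   "Prove that the sum of two even numbers is even."))
```

This requires `ollama serve` to be running locally; swap in the 70b tag for the larger variant.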

Future Lexa‑Rho variants will be announced as they become available.

License

Lexa‑Rho is released under the MIT License. You are free to use, modify and distribute the model and any derivative works, provided you include the copyright notice and permission notice.