I'm excited to share the Massive Legal Embedding Benchmark (MLEB) — the first comprehensive benchmark for legal embedding models.
Unlike previous legal retrieval datasets, MLEB was created by someone with actual domain expertise (I have a law degree and previously led the AI team at the Attorney-General's Department of Australia).
The idea for MLEB came to me while I was trying to train my own state-of-the-art legal embedding model: I found that there were no good benchmarks for legal information retrieval to evaluate it on.
That led to a months-long effort, working alongside my brother, to identify existing high-quality legal evaluation sets or, in many cases, build our own.
The final product was 10 datasets spanning multiple jurisdictions (the US, UK, Australia, Singapore, and Ireland), document types (cases, laws, regulations, contracts, and textbooks), and problem types (retrieval, zero-shot classification, and QA), all of which have been vetted for quality, diversity, and utility.
For a model to do well on MLEB, it needs both extensive legal domain knowledge and strong legal reasoning skills. That is deliberate: given how important high-quality embeddings are to legal RAG (particularly for reducing hallucinations), we wanted our benchmark to correlate as strongly as possible with real-world usefulness.
The dataset we are most proud of is called Australian Tax Guidance Retrieval. It pairs real-life tax questions posed by Australian taxpayers with relevant Australian Government guidance and policy documents.
We constructed the dataset by sourcing questions from the Australian Taxation Office's community forum, where taxpayers put their tax questions to accountants and ATO officials.
We found that, in most cases, such questions can be answered by reference to government web pages that, for whatever reason, users were unable to find themselves. Accordingly, we manually worked through a stratified sample of 112 challenging forum questions and, for each, extracted the relevant portions of the government guidance materials linked to by tax experts, after verifying that the linked guidance was correct.
What makes the dataset so valuable is that, unlike the vast majority of legal information retrieval evaluation sets currently available, it consists of genuinely challenging questions written by real users, rather than artificially constructed queries that, at times, diverge considerably from the tasks embedding models are actually used for.
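To make the shape of the data concrete, here is a hypothetical sketch of what a single query-to-guidance record in a retrieval set like this might look like. The field names, question, and guidance text are illustrative placeholders, not the actual MLEB schema.

```python
# Hypothetical sketch of a single record in a query-to-guidance retrieval set.
# The field names, question, and guidance text are illustrative placeholders,
# not the actual MLEB schema.
record = {
    "query": (
        "I work from home two days a week. Can I claim part of my internet "
        "bill as a deduction?"
    ),
    "relevant_passage": (
        "You can claim the work-related portion of your phone and internet "
        "expenses if you work from home and keep records showing how you "
        "calculated the work-related percentage."
    ),
    "source_url": "https://www.ato.gov.au/...",  # placeholder link to the guidance page
}
```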
Australian Tax Guidance Retrieval is just one of several evaluation sets that we painstakingly constructed ourselves, simply because no suitable alternatives existed.
We've contributed everything, including the code used to evaluate models on MLEB, back to the open-source community.
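To give a sense of what that evaluation involves, here is a minimal sketch of scoring a bi-encoder on an MLEB-style retrieval task using sentence-transformers. The model name and toy data are placeholders, and this is not the actual MLEB evaluation harness.

```python
# Minimal sketch of bi-encoder retrieval evaluation in the style of an MLEB
# retrieval task. The model name and toy data are placeholders; this is not
# the actual MLEB evaluation code.
import numpy as np
from sentence_transformers import SentenceTransformer

# Toy corpus and queries: each query has exactly one relevant document.
corpus = [
    "Guidance on claiming home office expenses as a tax deduction.",
    "Rules governing capital gains tax on the sale of an investment property.",
    "Requirements for lodging a company tax return.",
]
queries = [
    "Can I deduct my home office costs?",
    "Do I pay capital gains tax when I sell my rental property?",
]
relevant = [0, 1]  # index of the relevant corpus document for each query

model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder general-purpose model
doc_emb = model.encode(corpus, normalize_embeddings=True)
query_emb = model.encode(queries, normalize_embeddings=True)

scores = query_emb @ doc_emb.T          # cosine similarity (embeddings are normalized)
rankings = np.argsort(-scores, axis=1)  # documents ranked best-first per query

# Recall@1 and a simple NDCG@10 with one relevant document per query.
recall_at_1 = float(np.mean(rankings[:, 0] == relevant))
ranks = np.array(
    [np.where(rankings[i] == relevant[i])[0][0] for i in range(len(queries))]
)
ndcg_at_10 = float(np.mean(np.where(ranks < 10, 1.0 / np.log2(ranks + 2), 0.0)))

print(f"Recall@1: {recall_at_1:.3f}, NDCG@10: {ndcg_at_10:.3f}")
```

The sketch only covers a single retrieval task with one relevant document per query; the full benchmark also includes zero-shot classification and QA-style tasks, as described above.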
Our hope is that MLEB and the datasets within it will hold value long into the future so that others training legal information retrieval models won't have to detour into building their own "MTEB for law".
If you'd like to head straight to the leaderboard instead of reading our full announcement, you can find it here: https://isaacus.com/mleb
If you're interested in playing around with our model, which, as of 16 October 2025, ranks first on MLEB, check out our docs: https://docs.isaacus.com/quickstart