

Show HN: The Massive Legal Embedding Benchmark (MLEB)

https://huggingface.co/blog/isaacus/introducing-mleb
10•ubutler•21h ago
Hey HN,

I'm excited to share the Massive Legal Embedding Benchmark (MLEB) — the first comprehensive benchmark for legal embedding models.

Unlike previous legal retrieval datasets, MLEB was created by someone with actual domain expertise (I have a law degree and previously led the AI team at the Attorney-General's Department of Australia).

I came up with MLEB while trying to train my own state-of-the-art legal embedding model. I found that there were no good benchmarks for legal information retrieval to evaluate my model on.

That led me down a months-long process, working alongside my brother, of identifying or, in many cases, building our own high-quality legal evaluation sets.

The final product was 10 datasets spanning multiple jurisdictions (the US, UK, Australia, Singapore, and Ireland), document types (cases, laws, regulations, contracts, and textbooks), and problem types (retrieval, zero-shot classification, and QA), all of which have been vetted for quality, diversity, and utility.
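
For readers unfamiliar with how embedding models handle the non-retrieval problem types, zero-shot classification is typically done by embedding the text and each candidate label description and picking the closest label; retrieval and QA tasks are scored by ranking documents against each query in the same embedding space. A minimal sketch of the classification case, where the model and labels are illustrative stand-ins rather than anything used in MLEB:

    # Zero-shot classification via embedding similarity.
    # Model and labels are illustrative stand-ins, not those used in MLEB.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")

    labels = [
        "This clause is an indemnity.",
        "This clause is a limitation of liability.",
    ]
    clause = (
        "The Supplier shall hold harmless the Customer against all "
        "third-party claims arising from the Services."
    )

    label_emb = model.encode(labels, convert_to_tensor=True)
    clause_emb = model.encode(clause, convert_to_tensor=True)

    scores = util.cos_sim(clause_emb, label_emb)   # 1 x num_labels similarities
    print(labels[int(scores.argmax())])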

For a model to do well at MLEB, it needs to have both extensive legal domain knowledge and strong legal reasoning skills. That is deliberate — given just how important high-quality embeddings are to legal RAG (particularly for reducing hallucinations), we wanted our benchmark to correlate as strongly as possible with real-world usefulness.

The dataset we are most proud of is called Australian Tax Guidance Retrieval. It pairs real-life tax questions posed by Australian taxpayers with relevant Australian Government guidance and policy documents.

We constructed the dataset by sourcing questions from the Australian Taxation Office's community forum, where Australian taxpayers ask accountants and ATO officials their tax questions.

We found that, in most cases, such questions can be answered by reference to government web pages that, for whatever reason, users were unable to find themselves. Accordingly, we manually went through a stratified sample of 112 challenging forum questions and extracted relevant portions of government guidance materials linked to by tax experts that we verified to be correct.
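
To make that concrete, each entry in a set built this way boils down to a real user question paired with the passage of official guidance that answers it. A rough sketch of what one record might look like, with field names and text that are purely illustrative rather than the actual MLEB schema:

    # Hypothetical shape of one query-document pair.
    # Field names and text are illustrative, not the actual MLEB schema.
    example = {
        "query": "Do I need to declare interest earned in an overseas bank account?",
        "document": (
            "If you are an Australian resident for tax purposes, you must declare "
            "income from all sources, including interest earned on overseas bank "
            "accounts, in your tax return."
        ),
    }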

What makes the dataset so valuable is that, unlike the vast majority of legal information retrieval evaluation sets currently available, it consists of genuinely challenging real-world user-created questions, rather than artificially constructed queries that, at times, diverge considerably from the types of tasks embedding models are actually used for.

Australian Tax Guidance Retrieval is just one of several evaluation sets that we painstakingly constructed ourselves simply because there weren't any existing options.

We've contributed everything, including the code used to evaluate models on MLEB, back to the open-source community.
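
In practice, evaluating an embedding model on one of the retrieval tasks comes down to embedding queries and documents, ranking the documents by similarity for each query, and computing a metric such as recall@k or NDCG@k. A rough sketch of that loop, where the dataset identifier, column names, and model are placeholders rather than the actual MLEB repository layout or our evaluation code:

    # Rough retrieval evaluation loop.
    # Dataset id, column names, and model are placeholders, not the real MLEB layout.
    import torch
    from datasets import load_dataset
    from sentence_transformers import SentenceTransformer, util

    ds = load_dataset("isaacus/some-mleb-task", split="test")
    model = SentenceTransformer("all-MiniLM-L6-v2")

    queries = [row["query"] for row in ds]
    docs = [row["document"] for row in ds]   # assume doc i answers query i

    q_emb = model.encode(queries, convert_to_tensor=True, normalize_embeddings=True)
    d_emb = model.encode(docs, convert_to_tensor=True, normalize_embeddings=True)

    sims = util.cos_sim(q_emb, d_emb)                      # query x document similarities
    top10 = sims.argsort(dim=1, descending=True)[:, :10]   # top-10 doc indices per query
    gold = torch.arange(len(ds)).unsqueeze(1)
    print("recall@10:", (top10 == gold).any(dim=1).float().mean().item())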

Our hope is that MLEB and the datasets within it will hold value long into the future so that others training legal information retrieval models won't have to detour into building their own "MTEB for law".

If you'd like to head straight to the leaderboard instead of reading our full announcement, you can find it here: https://isaacus.com/mleb

If you're interested in playing around with our model, which, as of 16 October 2025 at least, is ranked first on MLEB, check out our docs: https://docs.isaacus.com/quickstart