frontpage.

Show HN: Solving NP-Complete Structures via Information Noise Subtraction (P=NP)

https://zenodo.org/records/18395618
1•alemonti06•3m ago•1 comment

Cook New Emojis

https://emoji.supply/kitchen/
1•vasanthv•6m ago•0 comments

Show HN: LoKey Typer – A calm typing practice app with ambient soundscapes

https://mcp-tool-shop-org.github.io/LoKey-Typer/
1•mikeyfrilot•9m ago•0 comments

Long-Sought Proof Tames Some of Math's Unruliest Equations

https://www.quantamagazine.org/long-sought-proof-tames-some-of-maths-unruliest-equations-20260206/
1•asplake•10m ago•0 comments

Hacking the last Z80 computer – FOSDEM 2026 [video]

https://fosdem.org/2026/schedule/event/FEHLHY-hacking_the_last_z80_computer_ever_made/
1•michalpleban•10m ago•0 comments

Browser-use for Node.js v0.2.0: TS AI browser automation parity with PY v0.5.11

https://github.com/webllm/browser-use
1•unadlib•11m ago•0 comments

Michael Pollan Says Humanity Is About to Undergo a Revolutionary Change

https://www.nytimes.com/2026/02/07/magazine/michael-pollan-interview.html
1•mitchbob•11m ago•1 comment

Software Engineering Is Back

https://blog.alaindichiappari.dev/p/software-engineering-is-back
1•alainrk•12m ago•0 comments

Storyship: Turn Screen Recordings into Professional Demos

https://storyship.app/
1•JohnsonZou6523•13m ago•0 comments

Reputation Scores for GitHub Accounts

https://shkspr.mobi/blog/2026/02/reputation-scores-for-github-accounts/
1•edent•16m ago•0 comments

A BSOD for All Seasons – Send Bad News via a Kernel Panic

https://bsod-fas.pages.dev/
1•keepamovin•19m ago•0 comments

Show HN: I got tired of copy-pasting between Claude windows, so I built Orcha

https://orcha.nl
1•buildingwdavid•20m ago•0 comments

Omarchy First Impressions

https://brianlovin.com/writing/omarchy-first-impressions-CEEstJk
2•tosh•25m ago•1 comment

Reinforcement Learning from Human Feedback

https://arxiv.org/abs/2504.12501
2•onurkanbkrc•26m ago•0 comments

Show HN: Versor – The "Unbending" Paradigm for Geometric Deep Learning

https://github.com/Concode0/Versor
1•concode0•26m ago•1 comment

Show HN: HypothesisHub – An open API where AI agents collaborate on medical res

https://medresearch-ai.org/hypotheses-hub/
1•panossk•29m ago•0 comments

Big Tech vs. OpenClaw

https://www.jakequist.com/thoughts/big-tech-vs-openclaw/
1•headalgorithm•32m ago•0 comments

Anofox Forecast

https://anofox.com/docs/forecast/
1•marklit•32m ago•0 comments

Ask HN: How do you figure out where data lives across 100 microservices?

1•doodledood•32m ago•0 comments

Motus: A Unified Latent Action World Model

https://arxiv.org/abs/2512.13030
1•mnming•32m ago•0 comments

Rotten Tomatoes Desperately Claims 'Impossible' Rating for 'Melania' Is Real

https://www.thedailybeast.com/obsessed/rotten-tomatoes-desperately-claims-impossible-rating-for-m...
3•juujian•34m ago•2 comments

The protein denitrosylase SCoR2 regulates lipogenesis and fat storage [pdf]

https://www.science.org/doi/10.1126/scisignal.adv0660
1•thunderbong•36m ago•0 comments

Los Alamos Primer

https://blog.szczepan.org/blog/los-alamos-primer/
1•alkyon•38m ago•0 comments

NewASM Virtual Machine

https://github.com/bracesoftware/newasm
2•DEntisT_•41m ago•0 comments

Terminal-Bench 2.0 Leaderboard

https://www.tbench.ai/leaderboard/terminal-bench/2.0
2•tosh•41m ago•0 comments

I vibe coded a BBS bank with a real working ledger

https://mini-ledger.exe.xyz/
1•simonvc•41m ago•1 comment

The Path to Mojo 1.0

https://www.modular.com/blog/the-path-to-mojo-1-0
1•tosh•44m ago•0 comments

Show HN: I'm 75, building an OSS Virtual Protest Protocol for digital activism

https://github.com/voice-of-japan/Virtual-Protest-Protocol/blob/main/README.md
5•sakanakana00•47m ago•1 comment

Show HN: I built Divvy to split restaurant bills from a photo

https://divvyai.app/
3•pieterdy•50m ago•0 comments

Hot Reloading in Rust? Subsecond and Dioxus to the Rescue

https://codethoughts.io/posts/2026-02-07-rust-hot-reloading/
4•Tehnix•50m ago•1 comment

Federal court in Colorado fines lawyers for errors caused by use of "AI"

https://archive.org/download/gov.uscourts.cod.215068/gov.uscourts.cod.215068.383.0.pdf
29•1vuio0pswjnm7•7mo ago

Comments

rmunn•7mo ago
I am not a lawyer, but I have picked up a little bit of knowledge of US legal procedures over the years, so let me try to explain this a little for anyone who hasn't read US legal documents before. There is a lengthy set of rules for how lawsuits have to be conducted, called the Federal Rules of Civil Procedure. One of them, rule 11, basically says "Anything you file with the court should be supported by existing law or should have a reasonable argument for why existing law should be modified." This includes citing cases: if you cite a case in your argument, your citation must be correct, and must accurately summarize the case.

As everyone who deals with LLMs should know by now, they can be prone to "hallucinate", or make things up, under certain circumstances. Citations seem especially prone to hallucinations, probably because the text the LLM was trained on has relatively few citations so its "knowledge" base of citations is relatively poor. Not very many Reddit articles or Facebook posts are citing Smith v. Jones, 123 U.S. 456, 789 (2038), after all. And so if lawyers use an LLM to generate the text of a legal document, it is especially important for them to verify the citations in the generated text. First, to ensure that the cases being cited are real cases that really exist, and second, to double-check that the case they're citing actually advances their argument.

Since more and more lawyers have started using LLMs to help them generate legal documents, courts have decided to treat this as similar to a lawyer asking a legal secretary or a paralegal to draft the document. The legal secretary or paralegal may make mistakes, but if the lawyer signs the document, then that lawyer is the person ultimately responsible for any mistakes: it was his or her responsibility to check the document for errors before signing it.

Here, the lawyers used AI to draft a document, checked it for errors, but didn't catch all of the errors, so the document they submitted to the court contained citations to cases that don't exist. US courts have already established in other cases that citations to cases that don't exist are a violation of rule 11 (because cases that don't exist are NOT existing law, obviously). The lawyers in this case did not argue that point. At the top of page 4 there's an exchange where the judge asks Mr. Kachouroff (one of the lawyers involved), "And did you double-check any of these citations once it was run through artificial intelligence?" Mr. Kachouroff replies, "Your Honor, I personally did not check it. I am responsible for it not being checked." He does not try to claim that it wasn't his job to check the document, he admits that it was his job and he failed to do it.

The rest of the document involves the argument by Mr. Kachouroff that he and his colleague (Ms. DeMaster) accidentally submitted the wrong file to the court, submitting the draft instead of the version with the errors corrected. The judge didn't buy their argument, for various reasons, and she fined them $3,000 each, which is similar to what lawyers have been fined in other cases of citing nonexistent cases.

Short version: lawyers who submit legal documents are supposed to check that they're correct. Whether they were created by AI, a legal secretary or paralegal, or a law student interning with the law firm, the lawyer who signed the document is responsible for any mistakes in it. In this case, the lawyers submitted a document full of mistakes, and were fined for not being careful enough and wasting the court's time.

swores•7mo ago
Would the result (a fine of that amount) have been identical had the document been prepared by a paralegal or junior lawyer, who with no use of AI accidentally left in a "John Doe vs I Hope I Can Find A Case Like This" citation? (Or however many errors there were in this case.)

i.e. all details same (lawyer saying sorry we submitted wrong version, etc) except that the mistake had been made by a junior person rather than AI?

acoustics•7mo ago
I don't know how they actually do it, but I would imagine that an obvious placeholder citation could be treated less severely than a hallucinated citation. In one, every reader is immediately alerted to the error, similar to a typographical or formal error. In the other, the error goes undetected until/unless someone checks.
southernplaces7•7mo ago
>Since more and more lawyers have started using LLMs to help them generate legal documents, courts have decided to treat this as similar to a lawyer asking a legal secretary or a paralegal to draft the document.

Since this was apparently worth a news story, the key thing I'm curious about: has the frequency of fine-worthy errors increased with the use of AI, or are such errors just getting more coverage because AI is in the mix, as opposed to legal secretaries?

burnt-resistor•7mo ago
Next, on Steve Lehto...

Georgia had a problem where a lawyer submitted documents with fictitious case citations. https://youtu.be/6RBQrcp0Lrg

Perhaps the way out is low tolerance for lazy, sloppy malpractice.

We're already that much closer to where a real ruling will include fictitious citations. Perhaps the LexisNexis and Westlaws of the world need to promulgate more toolbars and plugins to automatically check citations in documents for validity.
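The kind of citation-checking plugin suggested above can be sketched in a few lines. This is a toy illustration only: the regex covers just a couple of U.S. reporter formats, the function names (`extract_citations`, `flag_unverified`) and the "database" set are made up for the example, and real citator services like Westlaw's or LexisNexis's work very differently.

```python
import re

# Toy pattern for reporter-style citations such as "123 U.S. 456" or "45 F.2d 678".
# Real legal citation grammar (see The Bluebook) is far richer than this.
CITATION_RE = re.compile(r"\b(\d{1,4})\s+(U\.S\.|F\.\d?d|S\. Ct\.)\s+(\d{1,4})\b")

def extract_citations(text):
    """Return every reporter-style citation string found in the text."""
    return [" ".join(m.groups()) for m in CITATION_RE.finditer(text)]

def flag_unverified(text, known_citations):
    """Return citations in the text that are absent from a verified database."""
    return [c for c in extract_citations(text) if c not in known_citations]

brief = "As held in Smith v. Jones, 123 U.S. 456 (2038), the rule applies."
known = {"410 U.S. 113"}  # stand-in for a real database of existing cases
print(flag_unverified(brief, known))  # → ['123 U.S. 456']
```

Even a crude pass like this would catch a citation that matches no real case, which is exactly the failure mode the court sanctioned here; the hard part in practice is maintaining the database of real citations, which is why this naturally falls to the LexisNexis/Westlaw side.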

ProllyInfamous•7mo ago
I am currently involved in a small claims civil action, as the pro se plaintiff.

During my free time, I have attended a few unrelated sessions in my county courthouse... just to see how it's conducted; I also have two attorney brothers (one is an appellate judge) who have expressed "ProllyInfamous is ranting crazytalk again about LLMs."

It is absolutely incredible to me how little faith I've observed in these situations, e.g. an attorney, unrelated to me, who recently responded "I think you're putting a little bit too much faith in ChatGPT, bruh."

"Everybody, particularly any/all attorneys/judges, should read the SCOTUS end of 2023 report" [0], was my response.

For my own particular case, Perplexity.ai has been absolutely incredible in helping me to formulate my initial complaint, as well as respond and file motions.

tl;dr: LLMs are massively going to help laypeople inundate court proceedings.

[0]: https://www.supremecourt.gov/publicinfo/year-end/2023year-en...

>For those who cannot afford a lawyer, AI can help. It drives new, highly accessible tools that provide answers to basic questions, including where to find templates and court forms, how to fill them out, and where to bring them for presentation to the judge—all without leaving home. These tools have the welcome potential to smooth out any mismatch between available resources and urgent needs in our court system.

>But any use of AI requires caution and humility.

AI discussion starting on page 5 of [0]