
Federal court in Colorado fines lawyers for errors caused by use of "AI"

https://archive.org/download/gov.uscourts.cod.215068/gov.uscourts.cod.215068.383.0.pdf
29•1vuio0pswjnm7•7mo ago

Comments

rmunn•7mo ago
I am not a lawyer, but I have picked up a little bit of knowledge of US legal procedures over the years, so let me try to explain this a little for anyone who hasn't read US legal documents before. There is a lengthy set of rules for how lawsuits have to be conducted, called the Federal Rules of Civil Procedure. One of them, rule 11, basically says "Anything you file with the court should be supported by existing law or should have a reasonable argument for why existing law should be modified." This includes citing cases: if you cite a case in your argument, your citation must be correct, and must accurately summarize the case.

As everyone who deals with LLMs should know by now, they can be prone to "hallucinate", or make things up, under certain circumstances. Citations seem especially prone to hallucinations, probably because the text the LLM was trained on has relatively few citations so its "knowledge" base of citations is relatively poor. Not very many Reddit articles or Facebook posts are citing Smith v. Jones, 123 U.S. 456, 789 (2038), after all. And so if lawyers use an LLM to generate the text of a legal document, it is especially important for them to verify the citations in the generated text. First, to ensure that the cases being cited are real cases that really exist, and second, to double-check that the case they're citing actually advances their argument.

Since more and more lawyers have started using LLMs to help them generate legal documents, courts have decided to treat this as similar to a lawyer asking a legal secretary or a paralegal to draft the document. The legal secretary or paralegal may make mistakes, but if the lawyer signs the document, then that lawyer is the person ultimately responsible for any mistakes: it was his or her responsibility to check the document for errors before signing it.

Here, the lawyers used AI to draft a document, checked it for errors, but didn't catch all of the errors, so the document they submitted to the court contained citations to cases that don't exist. US courts have already established in other cases that citations to cases that don't exist are a violation of rule 11 (because cases that don't exist are NOT existing law, obviously). The lawyers in this case did not argue that point. At the top of page 4 there's an exchange where the judge asks Mr. Kachouroff (one of the lawyers involved), "And did you double-check any of these citations once it was run through artificial intelligence?" Mr. Kachouroff replies, "Your Honor, I personally did not check it. I am responsible for it not being checked." He does not try to claim that it wasn't his job to check the document, he admits that it was his job and he failed to do it.

The rest of the document involves the argument by Mr. Kachouroff that he and his colleague (Ms. DeMaster) accidentally submitted the wrong file to the court, submitting the draft instead of the version with the errors corrected. The judge didn't buy their argument, for various reasons, and she fined them $3,000 each, which is similar to what lawyers have been fined in other cases of citing nonexistent cases.

Short version: lawyers who submit legal documents are supposed to check that they're correct. Whether they were created by AI, a legal secretary or paralegal, or a law student interning with the law firm, the lawyer who signed the document is responsible for any mistakes in it. In this case, the lawyers submitted a document full of mistakes, and were fined for not being careful enough and wasting the court's time.

swores•7mo ago
Would the result (a fine of that amount) have been identical had the document been prepared by a paralegal or junior lawyer, who with no use of AI accidentally left in a "John Doe vs I Hope I Can Find A Case Like This" citation? (Or however many errors there were in this case.)

i.e. all details same (lawyer saying sorry we submitted wrong version, etc) except that the mistake had been made by a junior person rather than AI?

acoustics•7mo ago
I don't know how they actually do it, but I would imagine that an obvious placeholder citation could be treated less severely than a hallucinated citation. In one, every reader is immediately alerted to the error, similar to a typographical or formal error. In the other, the error goes undetected until/unless someone checks.
southernplaces7•7mo ago
>Since more and more lawyers have started using LLMs to help them generate legal documents, courts have decided to treat this as similar to a lawyer asking a legal secretary or a paralegal to draft the document.

Since this was apparently worth a news story, the key thing I'm curious about: has the frequency of fine-worthy errors increased with the use of AI, or are such errors just getting more coverage because AI is in the mix as opposed to legal secretaries?

burnt-resistor•7mo ago
Next, on Steve Lehto...

Georgia had a problem where a lawyer submitted documents with fictitious case citations. https://youtu.be/6RBQrcp0Lrg

Perhaps the way out is low tolerance for lazy, sloppy malpractice.

We're already that much closer to where a real ruling will include fictitious citations. Perhaps the LexisNexis and Westlaws of the world need to promulgate more toolbars and plugins to automatically check citations in documents for validity.
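The kind of citation-checking plugin described above could start with something as simple as extracting reporter citations from a brief and flagging them for lookup against a case-law database. A minimal sketch, assuming a simplified "Name v. Name, volume Reporter page" format (real Bluebook citation formats are far more varied, and the pattern here is illustrative only):

```python
import re

# Rough pattern for citations like "Smith v. Jones, 123 U.S. 456 (2038)".
# Party names are assumed to be runs of capitalized words; real-world
# citations (e.g. "United States ex rel. ...") need a much richer grammar.
CITATION_RE = re.compile(
    r"(?P<case>[A-Z][\w.'-]*(?:\s+[A-Z][\w.'-]*)*"
    r"\s+v\.\s+"
    r"[A-Z][\w.'-]*(?:\s+[A-Z][\w.'-]*)*)"
    r",\s+(?P<volume>\d+)\s+(?P<reporter>[A-Z][\w.]*)\s+(?P<page>\d+)"
)

def extract_citations(text: str):
    """Return (case name, volume, reporter, first page) tuples found in text.

    Each hit would then be verified against an authoritative case-law
    database; anything that fails lookup gets flagged for human review.
    """
    return [
        (m.group("case"), int(m.group("volume")),
         m.group("reporter"), int(m.group("page")))
        for m in CITATION_RE.finditer(text)
    ]

brief = "The court noted in Smith v. Jones, 123 U.S. 456 (2038) that the rule applies."
print(extract_citations(brief))  # → [('Smith v. Jones', 123, 'U.S.', 456)]
```

The hard part isn't extraction but verification: the tool still needs a trustworthy database to check each hit against, which is exactly where the LexisNexis/Westlaw-type services would come in.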

ProllyInfamous•7mo ago
I am currently involved in a small claims civil action, as the pro se plaintiff.

During my free time, I have attended a few unrelated sessions in my county courthouse... just to see how it's conducted; I also have two attorney brothers (one is an appellate judge) who have expressed "ProllyInfamous is ranting crazytalk again about LLMs."

It is absolutely incredible to me how little faith I've observed in these situations, e.g. an attorney, unrelated to me, who recently responded "I think you're putting a little bit too much faith in ChatGPT, bruh."

"Everybody, particularly any/all attorneys/judges, should read the SCOTUS end of 2023 report" [0], was my response.

For my own particular case, Perplexity.ai has been absolutely incredible in helping me to formulate my initial complaint, as well as respond and file motions.

tl;dr: LLMs are massively going to help laypeople inundate court proceedings.

[0]: https://www.supremecourt.gov/publicinfo/year-end/2023year-en...

>For those who cannot afford a lawyer, AI can help. It drives new, highly accessible tools that provide answers to basic questions, including where to find templates and court forms, how to fill them out, and where to bring them for presentation to the judge—all without leaving home. These tools have the welcome potential to smooth out any mismatch between available resources and urgent needs in our court system.

>But any use of AI requires caution and humility.

AI discussion starting on page 5 of [0]