
India's top court angry after junior judge cites fake AI-generated orders

https://www.bbc.com/news/articles/c178zzw780xo
112•tchalla•1h ago

Comments

voidUpdate•1h ago
How many of these cases do we have to have before lawyers realise that they need to check that the things an LLM tells them are actually true?
zthrowaway•1h ago
Do we see this a lot in the US? This seems to be more unique to India.
duskdozer•1h ago
today: https://news.ycombinator.com/item?id=47231189
tw04•1h ago
It’s happening A LOT in the US too. Mainstream media just doesn’t seem to find it that newsworthy.

https://arstechnica.com/tech-policy/2026/02/randomly-quoting...

coffeefirst•1h ago
It’s worse than that. We’re hearing about the lawyers and Ars Technica because the consequences are public and the errors are egregious.

It’s likely happening to everyone.

LunaSea•1h ago
It doesn't matter anymore.

LLMs just revealed what a decadent society we have set up for ourselves worldwide.

duskdozer•1h ago
I'm continually amazed at how much faith people have in them. I guess since they can sound like people and output really authoritative, confident text, it just overrides any skepticism subconsciously?
pjc50•51m ago
The advertising campaign is incredible.
ben_w•40m ago
Much as I like them, I do frequently remind myself of two things:

1) https://en.wikipedia.org/wiki/Clever_Hans

2) https://archive.org/details/nextgen-issue-26 as an example of how in the 90s we had rapid cycles of a new tech (3D graphics) astounding us with how realistic each new generation was compared to the previous one, while forgetting with each new game engine that we'd said and felt the same about graphics we now regarded as pathetic.

So yes, they do sound "authoritative and confident" and "it just overrides any skepticism subconsciously", but you shouldn't be amazed; we've always been like this.

moron4hire•37m ago
It's mind-boggling how much people claim to like LLMs when you would never design any other piece of software to operate the way LLMs do. Designing a system that interacts with the user through natural text creates an awful experience. It slows down every interaction as you dig through all the prose to get to the key information. It turns every computer interaction into a school math word problem.
PunchyHamster•30m ago
Yes, just as with politicians. And LLMs have been thoroughly tuned to appear that way.
Latty•1h ago
It doesn't matter, because any process that seems right most of the time but occasionally is wrong in subtle, hard to spot ways is basically a machine to lull people into not checking, so stuff will always slip through.

It's just like cars driving themselves where you need to be able to jump in if there is a mistake: humans are not going to react as fast as if they were driving, because they aren't going to be engaged, and no one can stay as engaged as when they were doing it themselves.

We need to stop pretending we can tell people they "just" need to check things from LLMs for accuracy, it's a process that inevitably leads to people not checking and things slipping through. Pretending it's the people's fault when essentially everyone using it would eventually end up doing that is stupid and won't solve the core problem.

voidUpdate•52m ago
Probably worth including a "bibliography" section of citations that can be automatically checked for existence, then
lazide•40m ago
Not enough - you’d also need to check that they say/mean what is being implied. Which is a real problem.
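The "bibliography" idea above could be automated as a simple existence check. Here is a minimal Python sketch, with a made-up citation format and a stand-in index (a real system would query an actual case-law database, and, as noted, even a passing check wouldn't prove a citation supports the claim it's attached to):

```python
# Sketch: verify that every citation in a draft exists in a trusted index
# before filing. The index and the citation format are illustrative only.
import re

# Simplified pattern, e.g. "2014 SCC 123"; real citation formats vary widely.
CITATION_RE = re.compile(r"\b\d{4} SCC \d+\b")

def check_citations(draft: str, known_citations: set[str]) -> list[str]:
    """Return citations from the draft that are absent from the index."""
    cited = CITATION_RE.findall(draft)
    return [c for c in cited if c not in known_citations]

index = {"2014 SCC 123", "2019 SCC 45"}
draft = "As held in 2014 SCC 123 and affirmed in 2021 SCC 999, ..."
print(check_citations(draft, index))  # → ['2021 SCC 999']
```

This only catches fabricated citations; checking that a real citation actually says what the filing claims still requires reading the source.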
macintux•44m ago
Even disregarding self driving features, it seems like the smarter we make cars the dumber the drivers are. DRLs are great, until they allow you to drive around all night long with no tail lights and dim front lighting because you’re not paying enough attention to what’s actually turned on.
YeGoblynQueenne•8m ago
What kind of AI is this that you constantly need a human to check its job? Do you think Jean-Luc Picard had to constantly check the output of the Enterprise computer? No he didn't. If AI is not better than humans, then what the heck is the point? You might as well just use humans.
codegladiator•1h ago
> She had no intention to misquote or misrepresent the rulings and that "the mistake occurred solely due to the reliance on an automatic source", the high court wrote

I don't think the intention matters here. It's the same deal with every profession using an LLM to "automate" their work. The onus is on the professional, not the LLM. The Ars Technica case could have been justified in the same manner otherwise.

Not knowing the law isn't an excuse to break the law, so why is not knowing the tool an excuse to blame the tool?

hypeatei•47m ago
This is why LLMs won't replace humans wholesale in any profession: you can't hold a machine accountable. Most of my chatbot experiences with various support channels end up requiring human intervention anyway when money is involved.

Maybe true general intelligence would solve these issues, but LLMs aren't meeting that threshold anytime soon, imo. Stochastic parrots won't rule the world.

lazide•41m ago
Even ‘true general intelligence’ (if we count humans as that) screws up frequently, sometimes (often?) intentionally for its own benefit, which is why accountability is such a necessary element.

If someone won’t be held liable for the end result at some point, then there is no reason to ensure an even somewhat reasonable end result. It’s fundamental.

Which is also why I suspect so many companies are pushing ‘AI’ so hard - to be able to do unreasonable things while having a smokescreen to avoid being penalized for the consequences.

hypeatei•27m ago
> to be able to do unreasonable things while having a smokescreen

Maybe, but I feel like the calculus remains unchanged for professions that already lack accountability (police, military, C-suite, three letter agencies, etc.); LLMs are yet another tool in their toolbox to obfuscate but they were going to do that anyway.

Peons will continue to face consequences and sanctions if they screw up by using hallucinated output.

the_af•46m ago
They cannot even claim they weren't aware of the danger. LLM hallucinations have been a discussed topic, not some obscure failure mode. Almost every article on problems with AI mentions this.

So the judge was lazy, incompetent, or both.

lukan•39m ago
Not just discussed, but explicitly mentioned under every chat interface: "This tool can make mistakes"

(Sure, more honest would be "this tool makes stuff up in a convincing way")

fidotron•40m ago
Using an LLM to automate is simply the newer cheaper outsourcing with much of the same entertainment, but less food poisoning and air travel.

Over the last 20 years a lot of engineering (proper eng, not software) work in the west has been outsourced to cheaper places, with the certified engineers simply signing off on the work done elsewhere. This results in a cycle of doing things ever faster/more cheaply and safeguards disappearing under the pressure to go ever cheaper and faster.

As someone else pointed out, LLMs have just really exposed what a degraded state we have headed into rather than being a cause of it themselves. It's going to be very tough for people with no standards - they'll enjoy cheap stuff for a while and then it will all go away. Surprised Pikachu faces all round.

(I'm pro AI btw, just be responsible.)

kingstnap•36m ago
> excuse to blame the tool

The issue is that ultimately blaming people doesn't really solve things, unless it's genuinely a one-of-a-kind case. But if this happened once it's probably going to happen again, and this isn't the first such case of LLM hallucinations in law.

It's weird to think this way, because it's easy to just point at a person for a specific instance. But when you see something repeat over and over again, you need to consider that, if your ultimate goal is to stop it from happening, you have to adjust the tools even if the people using them were at fault in every case.

boringg•24m ago
Absolutely, setting precedent will change people's behaviors and expectations.

Not holding people accountable is a fool's existence.

boringg•25m ago
Aka some kind of political protection has got to be the reason she's still employed.
alansaber•54m ago
This is a big problem in the US and UK too. Lawyers are not technical at all, and they need a robust system of governance, since currently they're directly editing (not even diffing) documents with a chatbot, which makes these mistakes inevitable. See https://insights.doughtystreet.co.uk/post/102mi96/38-uk-case...
cmiles8•47m ago
There will be many more things like this and it’s an elephant in the room for the supposed mass replacement of people with AI.

Some human still has to be accountable. Someone has to get fired / go to jail when something screws up.

You can make humans more productive but for the foreseeable future you can’t take the human out of the loop to have an AI implementation that’s not a disaster/lawsuit waiting to happen. That, probably more than anything else, is why companies just aren’t seeing the much promised mass step change in productivity from AI and why so many companies are now saying they see zero ROI from AI efforts.

The lowest-hanging fruit will be low-value rote repetitive tasks like the whole India offshoring industry, which will be the first to vaporize if AI does start replacing humans. But until companies see success replacing labor en masse on the lowest of low-hanging fruit, things higher up the value chain will remain relatively safe.

PS: Nearly every mass layoff recently citing “AI productivity” hasn’t withstood scrutiny. They all seem to be just poorly performing companies slashing staff after overhiring, with management looking for any excuse other than admitting that.

fidotron•33m ago
> Some human still has to be accountable. Someone has to get fired / go to jail when something screws up.

The turning point will be when threatening an AI with being unplugged for screwing up works in motivating it to stop making things up.

Some people will rightly point out that is kind of what the training process is already. If we go around this loop enough times it will get there.

Hendrikto•26m ago
You are making a lot of assumptions here. You assume, among other things, that AI has self-preservation drive, can be threatened, can be motivated, and above all that we know how to accomplish that and are already doing so. I would dispute all of that.
yes_man•15m ago
For now maybe not. (Maybe).

But just as with evolution in nature, isn’t it likely that in the future the AIs that have a preservation drive are the ones that survive and proliferate? Seeing as they optimize for their survival and proliferation, not blindly for what they were trained on.

I am not discounting this happening already, not by the LLMs necessarily being sentient but at least by being intelligent enough to emulate sentience. It’s just that for now, humanity is in control of which AI models are deployed.

miningape•5m ago
Do you also happen to think that LLM training involves dog treats and drill exercises?
hek2sch•33m ago
Isn't the issue simply one of not using the right tool? When the stakes are high and you should be checking details, the right tools are grounded AI solutions like nouswise and NotebookLM, not the general-purpose chatbots that almost everyone knows might hallucinate. I also believe this use case is definitely low-hanging fruit to automate a lot of manual work, but it comes with new requirements like transparency to help with verifying the responses.
edgarvaldes•17m ago
Is this a solved problem using the right tools?
kace91•7m ago
I think this is an even clearer case than usual. With software engineers and office work you don’t have legal limitations on who can perform the work, but they exist for lawyers and doctors for example.

So if this is a tool, the fault lies fully with the user; and if this is treated as “another person’s work”, then the user knowingly passed the work to someone not authorized to do it. Both end up with the user being guilty.

chris_wot•42m ago
In Australia, our universities are finding that a large proportion of Indian students have been using GenAI for cheating. Often they get away with it. I'm not saying that people other than Indian overseas students cheat, but it does seem more entrenched. I'd love to know why. It doesn't actually help in the long term!
n1b0m•33m ago
Indian students have embraced GenAI at a rate significantly higher than the global average, with nearly 90% of students in some surveys actively using these tools.

Government Policy and National Initiatives: The National Education Policy (NEP 2020) has shifted the focus toward digital literacy. The government has introduced AI as a skill subject for younger grades and launched programs like AI for All to promote nationwide awareness.

love2read•31m ago
In the United States, cheating via AI is now rampant regardless of ethnicity. I know little of Australian Universities but I would assume it’s similar over there.
PunchyHamster•29m ago
I'd imagine they are just worse at hiding it. GenAI is rampant pretty much everywhere in the school systems of most countries
DaedalusII•23m ago
>The number of international students studying in Australia totalled 833,041 for the January-October 2025 period

>The United States hosts the highest number of international students on record, with approximately 1.1 to 1.2 million

The US has 32% more students than Australia and 1121% more people. Imagine if the US took on 13 million foreign college students per year lol

It does help them in the long run, because it ensures they get to reside in Australia. After 4 years they get permanent residence rights and benefits, etc.

dartharva•21m ago
They are not there for the knowledge - knowledge is cheap and abundant. They are there for the credentials and subsequent potential access to offshore jobs.
aitchnyu•16m ago
How unserious/serious are the universities? I've heard of diploma mills in Canada taking international students, letting them spend most of their time waiting at coffee shops, and awarding them MBAs so they can be full-time waiters and citizens.
bogzz•16m ago
I imagine even a slight impediment in terms of being able to parse and express yourself in a language that you don't know as well as your mother tongue makes LLM usage much more tantalizing.

And not knowing the language quite as well as native speakers would also make you more likely to be discovered as having used an LLM to do coursework.

sathish316•28m ago
Next-token prediction and hallucination as a bug. This should be of deep concern to all frontier labs, who think integrity and trust are optional when LLMs are used this way in the places where they matter most.
ionwake•28m ago
One should also consider that even with the fake hallucinated AI citations, the productivity and correctness of the work produced by the culprit (in general) may still have been of higher quality than before AI, regardless of the failures.
Hendrikto•19m ago
Hard to believe, when this judge apparently thought that outsourcing their extremely confidential, sensitive, and important work to a known unreliable tool was a good idea. And then further thought that they apparently did not even need to check the results.

Sounds like extreme incompetence or laziness.

kaptainscarlet•24m ago
There will be loads of papers and publications with fake citations. AI will be trained on these. In the end, we'll have more and more hallucinated information than true content on the internet.
dartharva•23m ago
The scary thing is that the Indian judiciary is infamous for being incapable of tolerating any kind of criticism and not hesitating to jail people for "contempt" just for calling out corruption. Imagine the official courts of 1.4B+ people being run by such braindead narcissists, now no longer having to even pretend to do their jobs as they offload everything to AI tools.
