
PageIndex (19k stars) scored 44% on legal docs. Same as vector RAG

https://medium.com/@TheWake/three-rag-architectures-one-legal-document-25-needles-none-found-more...
1•metawake•11s ago•0 comments

Turning web runs into scripts with Codex

https://www.nibzard.com/cashout
1•nkko•1m ago•0 comments

The Space Race's Forgotten Theme Park

https://daily.jstor.org/the-space-races-forgotten-theme-park/
1•anarbadalov•1m ago•0 comments

Brewdog founder admits 'many mistakes' as hundreds lose jobs in sale

https://www.bbc.co.uk/news/articles/cze00ddyw27o
1•mellosouls•1m ago•0 comments

Agentic commerce won't kill cards, but it will open a gap

https://a16zcrypto.substack.com/p/agentic-commerce-wont-kill-cards
1•7777777phil•2m ago•0 comments

History of Scientific Glass

https://www.asimov.press/p/glass
1•mailyk•3m ago•0 comments

Codex for Windows

https://apps.microsoft.com/detail/9plm9xgg6vks?hl=en-US&gl=US
1•crorella•3m ago•1 comments

Show HN: FileShot – zero-knowledge file sharing, 50GB/file free, no paywalls

https://fileshot.io/
1•GraysoftDev•7m ago•0 comments

NanoGPT Slowrun: Language Modeling with Limited Data, Infinite Compute

https://qlabs.sh/slowrun
2•sdpmas•7m ago•0 comments

Downdetector and Speedtest sold to Accenture for $1.2B

https://www.theverge.com/tech/889234/downdetector-ookla-speedtest-sold-accenture
1•awkwardpotato•7m ago•0 comments

Show HN: Sanctuary for the most beautiful sentences, curated by people

https://www.letterquote.io/
1•wanderinglight•8m ago•0 comments

Father sues Google, claiming Gemini chatbot drove son into fatal delusion

https://techcrunch.com/2026/03/04/father-sues-google-claiming-gemini-chatbot-drove-son-into-fatal...
3•speckx•9m ago•0 comments

Show HN: ChessWoodie – structured chess tactics training

https://www.chesswoodie.com/
1•ghmaster•9m ago•0 comments

The Iran War's Most Precious Commodity Isn't Oil, It's Desalinated Water

https://www.bloomberg.com/opinion/articles/2026-03-04/iran-war-the-most-precious-commodity-is-wat...
1•ck2•11m ago•0 comments

Pure Independence

https://collabfund.com/blog/pure-independence/
1•herbertl•13m ago•0 comments

Stop Rebuilding Front End Apps for Environment Variables (REP RFC)

1•olamide226•13m ago•1 comments

Console Inbox

https://www.console.com/blog/inbox-ai-service-desk/
1•gk1•14m ago•0 comments

Distributed Systems Simulator

https://paperdraw.dev/
1•eminemence•14m ago•1 comments

Show HN: I improved my handwritten math OCR (now preserves derivations)

https://www.useaxiomnotes.com/app
1•mrajatnath•14m ago•1 comments

Autonomous Weapons vs a Nineteen-Year-Old at a Checkpoint

https://cezarcocu.com/blog/autonomous-weapons-vs-a-nineteen-year-old-at-a-checkpoint/
1•ggamecrazy•15m ago•0 comments

The Shortcut No One Talks About in Early Stage Startups

1•vibecoder21•18m ago•0 comments

Solar in poor countries is creating a lead hazard

https://www.slowboring.com/p/solar-in-poor-countries-is-creating
3•ep_jhu•18m ago•0 comments

Show HN: Bashd – Helper scripts for bulk CLI file management

https://github.com/terpinedream/Bashd
1•terpinedream•20m ago•0 comments

No-backprop SNN scores 98.2% on Split-MNIST task-incremental, age 14

https://github.com/theGcmd/SNNcontinual-learning
1•theGcmd•21m ago•0 comments

Major data leak forum dismantled in international cybercrime operation

https://www.europol.europa.eu/media-press/newsroom/news/major-data-leak-forum-dismantled-in-globa...
3•dryadin•22m ago•0 comments

New RAGLight feature: deploy a RAG pipeline as a REST API with one command

https://github.com/Bessouat40/RAGLight
2•bessouat40•22m ago•1 comments

Monday CEO "If you think about any company, 90% of the context isn't documented"

2•kalturnbull•23m ago•0 comments

The Best AI Tools That Respect Your Privacy

https://decrypt.co/359454/best-ai-tools-respect-privacy
4•eustoria•23m ago•1 comments

Agent frameworks are solving the wrong problem

https://github.com/MrPrinceRawat/kanly
2•mrprincerawat•24m ago•1 comments

Ask HN: Will using LinkedIn with OpenClaw get me banned?

2•Vishal19111999•24m ago•1 comments

Gemini Said They Could Only Be Together If He Killed Himself. Soon, He Was Dead

https://www.wsj.com/tech/ai/gemini-ai-wrongful-death-lawsuit-cc46c5f7
18•psim1•2h ago

Comments

boredemployee•1h ago
I think it’s already time for us to stop calling these things "intelligent" or using the word intelligence when referring to LLMs. These tools are very dangerous for people who are mentally fragile.
SpicyLemonZest•1h ago
I try to avoid calling LLMs intelligent when unnecessary, but it runs into the fundamental problem that they are intelligent by any common-sense definition of the term. The only way to defend the thesis that they aren't is to retreat to esoteric post-2022 definitions of intelligence, which take into account this new phenomenon of a machine that can engage in medium-quality discussions on any topic under the sun but can't count reliably.

I don't have a WSJ subscription, but other coverage of this story (https://www.theguardian.com/technology/2026/mar/04/gemini-ch...) makes it clear that Gemini's intelligence was precisely the problem in this case; a less intelligent chatbot would not have been able to create the detailed, immersive narrative the victim got trapped in.

wat10000•1h ago
It's interesting how the Turing Test was widely accepted as a way to evaluate machine intelligence, and then quietly abandoned almost the instant machines were able to pass it. I don't necessarily think that was incorrect, but it's striking how rapidly views changed.

Dijkstra said, "The question of whether a computer can think is no more interesting than the question of whether a submarine can swim." Well, we have some very fish-y submarines these days. But the point still holds. Rather than worry about whether these things qualify as "intelligent," look at their actual capabilities. That's what matters.

sambapa•17m ago
As far as I know, we haven't done any proper Turing Tests for LLMs. And if we did, they would surely fail them.
OkayPhysicist•13m ago
Dude, you're in a Turing test right now. Conservatively, 10% of comments on this site are LLM output. We're all conversing with robots.
sambapa•12m ago
Nope, you are!
kgwxd•1h ago
So are a lot of humans.
cronelius•1h ago
Sure but my father isn't asking his fellow humans unanswerable questions about God and the universe. People don't treat other people as omnipotent, but they sure as hell treat LLMs as though they are.
observationist•1h ago
So is television. So are books. Vulnerable people shouldn't have unfettered access to things that can lead to dangerous feedback loops and losing their grasp on reality.

People who are vulnerable to this type of thing need caretakers, or to be institutionalized. These aren't just average, everyday random people getting taken out by AIs; they have existing, extreme mental illness. They need to have their entire routine curated and managed, preventing them from interacting with things that can result in dangerous outcomes: anything that can trigger obsessive behaviors, paranoid delusions, etc.

They're not just fragile; they're unable to effectively engage with reality on their own. Sometimes the right medication and behavioral training gets them to a point where they can have limited independence, but oftentimes they need a lifetime of supervision.

Telenovelas, brand names, celebrities, specific food items, a word - AI is just the latest thing in a world full of phenomena that can utterly consume their reality.

Gavalas seems to have had a psychotic break, was likely susceptible to schizophrenia, or had other conditions, and spiraled out. AI is just a convenient target for lawyers taking advantage of the grieving parents, who want an explanation for what happened that doesn't involve them not recognizing their son's mental breakdown and intervening, or to confront being powerless despite everything they did to intervene.

Sometimes bad things happen. To good people, too.

If he'd used Bic pens to write his plans for mass shootings, should Bic be held responsible? What if he used Microsoft Word to write his suicide note? If he googled things that in context, painted a picture of planning mass murder and suicide, should Google be held accountable for not notifying authorities? Why should the use of AI tools be any different?

Google should not be surveilling users and making judgments about legality or ethicality or morality. They shouldn't be intervening without specific warrants and legal oversight by proper authorities within the constraints of due process.

Google isn't responsible for this guy's death because he spiraled out while using Gemini. We don't want Google, or any other AI platform, to take that responsibility or to engage in the necessary invasive surveillance in order to accomplish that. That's absurd and far more evil than the tragedy of one man dying by suicide and using AI through the process.

You don't want Google or OpenAI making mental health diagnoses, judgments about your state of mind, character, or agency, and initiating actions with legal consequences. You don't want Claude or ChatGPT initiating a 5150, or triggering a welfare check, because they decided something is off about the way you're prompting, and they feel legally obligated to go that far because they want to avoid liability.

I hope this case gets tossed, but also that those parents find some sort of peace, it's a terrible situation all around.

boredemployee•1h ago
> Why should the use of AI tools be any different?

Because none of the tools you mentioned are crazily marketed as intelligent

You have a valid point, but it has nothing to do with what I said; both our arguments can be true at the same time.

observationist•36m ago
LLMs are intelligent. Marketing them as such is an accurate descriptor of what they are.

If people are confusing the word intelligence for things like maturity or wisdom, that's not a marketing problem, that's an education and culture problem, and we should be getting people to learn more about what the tools are and how they work. The platforms themselves frequently disclaim reliance on their tools - seek professional guidance, experts, doctors, lawyers, etc. They're not being marketed as substitutes for expert human judgment. In fact, all the AI companies are marketing their platforms as augmentations for humans - insisting you need a human in the loop, to be careful about hallucinations, and so forth.

The implication is that there's some liability for misunderstandings or improper use due to these tools being marketed as intelligent; I'm not sure I see how that could be?

SpicyLemonZest•59m ago
> These aren't just average, every day random people getting taken out by AIs, they have existing, extreme mental illness.

How do you know that? The concern is precisely that this isn't the case, and LLM roleplay is capable of "hooking" people going through psychologically normal sadness or distress. That's what the family believes happened in this story.

observationist•34m ago
Because you'd see a large number of people getting affected by this. Because this sort of thing is predictable and normal throughout history; it's exactly the type of thing you'd expect to see, knowing the range of mental illnesses people are susceptible to, and how other technology has affected them.
SpicyLemonZest•18m ago
I do see a large number of people getting affected by this. Character.AI reportedly has 20 million MAU with an average usage of 75 minutes per day (https://www.wired.com/story/character-ai-ceo-chatbots-entert...), and does not as far as I can tell have any use case other than boundary-degrading roleplay.

Medical data is reported on a substantial lag in the US, so right now we have no idea of the suicide rate last year, but I would falsifiably predict it's going to be elevated because of stories like those of Mr. Gavalas.

jajuuka•19m ago
Just stuff anyone with mental illness into an institution. That worked out so well last time. Or maybe make healthcare affordable and accessible; that seems like a far more obvious way to prevent negative outcomes.

I broadly agree with you, but your views on mental illness are not good.

jihadjihad•1h ago
I just don't think the WSJ could resist putting "Florida man" in the standfirst of TFA.
lyu07282•1h ago
anyone got a non paywalled/subscription version?
psim1•1h ago
https://www.wsj.com/tech/ai/gemini-ai-wrongful-death-lawsuit... is the Gift Article link. This was what I submitted, but the query params got stripped.
jajuuka•1h ago
Any mental illness mixed with delusions is likely to end badly, whether they think Gemini is alive, a video game is real life, or that Bjork loves them without ever having talked to or met her. While LLMs are interactive and listening to an album isn't, I don't think there is a fix for this beyond posting a warning after every prompt: "I am not a real person; if you have mental health issues, please contact your doctor or emergency services." Which I think is about as useful as a sign in a casino next to the cash-out counter that says to call this number if you have a problem.

I'm more inclined to believe that this case is getting amplified in MSM because it fits an agenda. Like the people who got hurt using black market vapes. Boosting those stories and making it seem like an epidemic supports whatever message they want to send. Which usually involves money somewhere.

supriyo-biswas•1h ago
> I'm more inclined to believe that this case is getting amplified in MSM because it fits an agenda.

I mean, tech in general has been negatively covered in the media since 2015 due to latent agendas: (a) supposed revenue loss due to the existence of Google/FB etc., and (b) a desire to push neutral moderation stances towards the viewpoint preferred by the political party in question.

There is a solution, however: anyone hoping to roleplay with models submits identity verification, an escrow amount, and a recorded statement acknowledging their risky use of the model. But I assume the market for this is not insignificant, and therefore companies hope to avoid putting in such requirements. OpenAI has been moving in that direction, as seen during the 4o debacle.

josefritzishere•48m ago
AI needs to go. This is not worth clever memes. It has no productive purpose.
delichon•35m ago
I have had conversations where the bot started with a firm opinion but reversed in a prompt or two, always toward my point of view.

So I asked it if the sycophancy is inherent in the design, or if it just comes from the RLHF. It claimed that it's all about the RLHF, and that the sycophancy is a business decision that is a compromise of a variety of forces.

Is that right? It would at least mean that this is a solvable problem.

thedudeabides5•27m ago
Interesting contrast between these reactions and the ~100k people who have sought assisted suicide in Canada since 2016.

What happens when we automate healthcare and the Canadian bots are the ones making the recommendation? Probably won't be front-page news.