frontpage.
newsnewestaskshowjobs

Made with ♥ by @iamnishanth

Open Source @Github

fp.

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
590•klaussilveira•11h ago•170 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
896•xnx•16h ago•544 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
93•matheusalmeida•1d ago•22 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
20•helloplanets•4d ago•13 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
26•videotopia•4d ago•0 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
200•isitcontent•11h ago•24 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
199•dmpetrov•11h ago•91 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
312•vecti•13h ago•136 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
353•aktau•17h ago•176 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
22•romes•4d ago•2 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
354•ostacke•17h ago•92 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
458•todsacerdoti•19h ago•229 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
7•bikenaga•3d ago•1 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
80•quibono•4d ago•18 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
256•eljojo•14h ago•154 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
53•kmm•4d ago•3 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
390•lstoll•17h ago•263 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
231•i5heu•14h ago•177 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
120•SerCe•7h ago•98 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
136•vmatsiiako•16h ago•59 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
68•phreda4•10h ago•12 comments

Zlob.h 100% POSIX and glibc compatible globbing lib that is faste and better

https://github.com/dmtrKovalenko/zlob
12•neogoose•4h ago•7 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
25•gmays•6h ago•7 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
44•gfortaine•9h ago•13 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
271•surprisetalk•3d ago•37 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1043•cdrnsf•20h ago•431 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
171•limoce•3d ago•90 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
60•rescrv•19h ago•22 comments

Show HN: Smooth CLI – Token-efficient browser for AI agents

https://docs.smooth.sh/cli/overview
89•antves•1d ago•64 comments

Show HN: ARM64 Android Dev Kit

https://github.com/denuoweb/ARM64-ADK
14•denuoweb•1d ago•2 comments
Open in hackernews

A Calif. teen trusted ChatGPT's drug advice. He died from an overdose

https://www.sfgate.com/tech/article/calif-teen-chatgpt-drug-advice-fatal-overdose-21266718.php
50•freediver•3w ago

Comments

NewJazz•3w ago
Took a while to figure out what the OD was of, but it was a combination of alcohol, kratom (or a stronger kratom-like drug), and xanax.
dfajgljsldkjag•3w ago
The article mentions 7-OH also known as feel free, which shockingly hasn't been banned and is sold without checks at many stores. There are quite a few Youtube videos talking about addiction to it and it sounds awful.

https://www.youtube.com/watch?v=TLObpcBR2yw

brasscupcakes•2w ago
Yeah it has been banned in quite a few states -- unfortunately those same few states end up banning plain powdered kratom right along with it.

Unadultated unextracted kratom is far safer than tylenol or ibuprofen in small doses and is widely used by recovering addicts for harm reduction.

(a gram or two drastically reduces the urge to take opioids, drink alcohol, etc.)

But 15 grams -- that's a LOT. Kratom is self limiting for most people in its powder form because beyond the first few grams it doesn't get any better (you just get sleepy and nauseous).

That amount will also cause the kind of constipation that will bring you to tears.

(In and of itself, though, even fifty grams of kratom isn't enough to kill you.)

But 164 Xanax, is that what he told the AI he took? Good God, if he'd even said ten it warranted a stern recommendation to call ambulance immediately.

I don't think you can lay these ridiculous responses at the feet of Reddit drug subforums.

I haven't lurked all of them by any means but of those that I visited the meth one was about the worst and even there, many voices of reason made themselves heard.

Guy posted from a trap house, saying another resident was there with a baby what should he do and with very few exceptions this sub full of tweakers said call the cops, call CPS immediately.

I don't engage with AI much except when I am doing research for a project and it's always just a preliminary step to help me see the big picture (I check and recheck every 'fact' up, down and sideways because I am astonished by just how much it getd wrong.

But ... I really had no idea it could be quite this stupid, parroting hearsay without attribution as if it was hard data.

It's very sobering.

unparagoned•2w ago
> Unadultated unextracted kratom is far safer than tylenol or ibuprofen in small doses and is widely used by recovering addicts for harm reduction

This is like when they introduced heroin as the safe morphine and that heroin was used to ween people off morphine. Sounds stupid now but it’s the same logic here with Kratom.

loeg•2w ago
The nice thing about kratom as opposed to opium/opiates people don't really get their heads around is that kratom is an atypical/partial agonist. This makes it much harder to overdose on than other opiates. Anyway, I think the leaf/powder should continue to be legal. But 7-OH should not.
loeg•3w ago
7-O is like kratom in a similar way that fentanyl is like opium, FWIW. It's much, much more potent. That stuff should be banned.

That said, he claims to have taken 15g of "kratom" -- that has to be the regular stuff, not 7-O -- that's still a huge, huge dose of the regular stuff. That plus a 0.125 BAC and benzos... is a lot.

dfajgljsldkjag•3w ago
The guardrails clearly failed here because the model was trying to be helpful instead of safe. We know that these systems hallucinate facts but regular users have no idea. This is a huge liability issue that needs to be fixed immediately.
akomtu•3w ago
Guardrails? OpenAI openly deceives users when it wraps this text generator with a quasi personality of a chatbot. This is how it gets users hooked. If OpenAI was honest, it would tell something along the lines: "this is a possible continuation of your input based on texts from reddit, adjust the temperature parameter to get a different result." But this would dispell the lie of AI.
unparagoned•2w ago
Llm will never be able to protect against jailbreaks. The safeguards will never hold against people trying to get over them
loeg•2w ago
The guardrails failed because the user wanted to get around them. Similarly, periodically people die falling after they walk around signs saying "danger, cliff, do not proceed."
datsci_est_2015•3w ago
This brings to mind some of the “darker” subreddits that circle around drug abuse. I’m sure there are some terrible stories about young people going down tragic paths due to information they found on those subreddits, or even worse, encouragement. There’s even the commonly-discussed account that (allegedly) documented their first experiences with heroin, and then the hole of despair they fell into shortly afterwards due to addiction.

But the question here is one of liability. Is Reddit liable for the content available on its website, if that content encourages young impressionable people to abuse drugs irresponsibly? Is ChatGPT liable for the content available through its web interface? Is anyone liable for anything anymore in a post-AI world?

ggm•3w ago
This is a useful question to ask in the context of carriers having specific defence. Also, publishers in times past had specific obligations. Common carrier and safe harbour laws.

I have heard it said that many online systems repudiate any obligation to act, lest they be required to act, and thus both acquire cost, and risk, when their enforcement of editorial standards fail: that which they permit, they will be liable for.

themafia•3w ago
The models are trained on fake internet conversations where group appeasement is an apparent goal. So now we have machines that just tell us what we clearly already want to hear.

Ask any model why something is bad, then separately ask why the same thing is good. These tools aren't fit for any purpose other than regurgitating stale reddit conversations.

PeterHolzwarth•3w ago
>"The models are trained on fake internet conversations where group appeasement is an apparent goal. So now we have machines that just tell us what we clearly already want to hear."

I get what you mean in principle, but the problem I'm struggling with is that this just sounds like the web in general. The kid hits up a subreddit or some obscure forum, and similarly gets group appeasement or what they want to hear from people who are self selected for the forum for being all-in on the topic and Want To Believe, so to speak.

What's the actual difference, in that sense, between that forum or subreddit, and an LLM do you feel?

<edit> And let me add that I don't mean this argumentatively. I am trying to square the idea of ChatGPT, in this case, as being, in the end, fundamentally different from going to a forum full of fans of the topic who are also completely biased and likely full of very poor knowledge.

andsoitis•3w ago
> What's the actual difference, in that sense, between that forum or subreddit, and an LLM do you feel?

In a forum, it is the actual people who post who are responsible for sharing the recommendation.

In a chatbot, it is the owner (e.g. OpenAI).

But in neither case are they responsible for a random person who takes the recommendation to heart, who could have applied judgement and critical thinking. They had autonomy and chose not to use their brain.

falkensmaize•3w ago
Nah, OpenAI can’t have it both ways. If they’re going to assert that their model is intelligent and is capable of replacing human work and authority they can’t also claim that it (and they) don’t have to take the same responsibility a human would for giving dangerous advice and incitement.
EgregiousCube•3w ago
Imagine a subreddit full of people giving bad drug advice. They're at least partially full of people who are intelligent and capable of performing human work - but they're mostly not professional drug advisors. I think at best you could hold OpenAI to the same standard as that subreddit. That's not a super high bar.

It'd be different if one was signing up to an OpenAI Drug Advice Product, which advertised itself as an authority on drug advice. I think in this case the expectation is set differently up front, with a "ChatGPT can make mistakes" footer on every chat.

threatofrain•3w ago
> I think in this case the expectation is set differently up front, with a "ChatGPT can make mistakes" footer on every chat.

If I keep telling you I suck at math while getting smarter every few months, eventually you're just going to introduce me as the friend who is too unconfident but is super smart at math. For many people LLMs are smarter than any friend they know, especially at K-12 level.

You can make the warning more shrill but it'll only worsen this dynamic and be interpreted as routine corporate language. If you don't want people to listen to your math / medical / legal advice, then you've got to stop giving decent advice. You have to cut the incentive off at the roots.

This effect may force companies to simply ban chatbots from certain conversation.

xethos•3w ago
Aternately, Google claimed gMail wa in public beta for years. People did not treat it like a public beta that could die with no warning, despite being explicitly told to by a company that, in recent years, has developed a reputation for doing that exact thing.
EgregiousCube•2w ago
The "at math" is the important part here - I've met more than a few people who are super smart about math but significantly less smart about drugs.

I don't think that it's a good policy to forcibly muzzle their drug opinions just because of their good arithmetic skills. Absent professional licensing standards, the burden is on the listener to decide where a resource is strong and where it is weak.

PeterHolzwarth•3w ago
I don't yet see how this case is any different from trusting stuff you see on the web in general. What's unique about the ChatGPT angle that is notably different from any number of forums, dark-net forums, reddit etc? I don't mean that there isn't potentially something unique here, but my initial thought is that this is a case of "an unfortunate kid typed questions into a web browser, and got horrible advice."

This seems like a web problem, not a ChatGPT issue specifically.

I feel that some may respond that ChatGPTS/LLMs available for chat on the web are specifically worse by virtue of expressing things with some degree of highly inaccurate authority. But again, I feel this represents the Web in general, not uniquely ChatGPTS/LLMs.

Is there an angle here I am not picking up on, do you think?

xyzzy123•3w ago
The different is that OpenAI have much deeper pockets.

I think there's also a legal perception that since AI is a new area, anything related to liability, IP, etc might be "up for grabs".

PeterHolzwarth•3w ago
To sue, do you mean? I don't quite understand what you intend to convey. Reddit has moderately deep pockets. A random forum related to drugs doesn't.
xyzzy123•3w ago
Random forums aren't worth suing. Legally, reddit is not treated as responsible for content that users post under section 230, i.e, this battle has already been fought.

On the other hand, if I post bad advice on my own website and someone follows it and is harmed, I can be found liable.

OpenAI _might plausibly_ be responsible for certain outputs.

PeterHolzwarth•3w ago
Ah, I see you added an edit of "I think there's also a legal perception that since AI is a new area, anything related to liability, IP, etc might be "up for grabs"."

I thought perhaps that's what you meant. A bit mercenary of a take, and maybe not applicable to this case. On the other hand, given the legal topic is up for grabs, as you note, I'm sure there will be instances of this tactical approach when it comes to lawsuits happening in the future.

stvltvs•3w ago
Those other technologies didn't come with hype about superintelligence that causes people to put too much trust in it.
falkensmaize•3w ago
AI companies are actively marketing their products as highly intelligent superhuman assistants that are on the cusp of replacing humans in every field of knowledge work, including medicine. People who have not read deeply into how LLMs work do not typically understand that this is not true, and is merely marketing.

So when ChatGPT gives you a confident, highly personalized answer to your question and speaks directly to you as a medical professional would, that is going to carry far more weight and authority to uninformed people than a Reddit comment or a blog post.

Animats•3w ago
> highly inaccurate authority.

The presentation style of most LLMs is confident and authoritative, even when totally wrong. That's the problem.

Systems that ingest social media and then return it as authoritative information are doomed to do things like this. We're seeing this in other contexts. Systems believing all their prompt history equally, leading to security holes.

squigz•3w ago
The difference is that those other mediums enable a conversation - if someone gives bad advice, you'll often have someone else saying so.
toofy•3w ago
if it doesn’t know medical advice, then it should say “why tf would i know?” instead it confidently responds “oh, you can absolutely do x mg of y mixed with z.”

these companies are simultaneously telling us it’s the greatest thing ever and also never trust it. which is it?

give us all of the money, but also never trust our product.

our product will replace humans in your company, also, our product is dumb af.

subscribe to us because our product has all the answers, fast. also, never trust those answers.

anonzzzies•3w ago
The big issue remains that llms cannot know their response is not accurate, even after 'reading' a page with the correct info, it can still simply generate wrong data for you. With authority as it just read and there is a link so it is right.
WalterBright•3w ago
Who decides what information is "accurate"?

My trust in what the experts say has declined drastically over the last 10 years.

ironman1478•3w ago
It's a valid concern, but with a doctor giving bad advice there is accountability and there are legal consequences for malpractice. These LLM companies want to be able to act authoritatively without any of the responsibility. They can't have it both ways.
WalterBright•3w ago
I don't mean just doctors giving bad advice. It comes from the top, too.

For example, I remember when eggs were bad for you. Now they're good for you. The amount of alcohol you can safely drink changes constantly. Not too long ago a glass of wine a day was good for you. I poisoned myself with margarine believing the government saying it was healthier than butter. Coffee cycles between being bad and good. Masks work, masks don't work. MJ is addictive, then not addictive, then addictive again. Prozac is safe, then not safe. Xanax, too.

And on and on.

BTW, everyone always knew that smoking was bad for you. My dad went to high school in the 1930s, and said the kids called cigarettes "coffin nails". It's hard to miss the coughing fits, and the black lungs in an autopsy. I remember in the 1960s seeing a smoker's lung in formaldehyde. It was completely black, with white cancerous blobs. I avoided cigarettes ever since.

The notion that people didn't know that cigs were bad until the 1960s is nonsense.

wat10000•3w ago
A major difference is that it’s coming straight from the company. If you get bad advice on a forum, well, the forum just facilitated that interaction, your real beef is with the jackass you talked to. With ChatGPT, the jackass is owned and operated by the company itself.
ninjin•3w ago
The uniqueness of the situation is that OpenAI et al. poses as an intelligent entity that serves information to you as an authority.

If you go digging on darkweb forums and you see user Hufflepuffed47___ talking about dosages on a website in black and neon green, it is very different from paying a monthly subscription to a company valued in the billions that serves you the same information through the same sleek channel that "helps" you with your homework and tells you about the weather. OpenAI et al. are completely uprooting the way we determine source credibility and establish trust on the web and they elected to be these "information portals".

With web search, it is very clear when we cross the boundary from the search engine to another source (or it used to be before Google and others muddied it with pre-canned answers), but in this case it is entirely erased and over time you come to trust the entity you are chatting with.

Cases like these were bound to happen and while I do not fault the technology itself, I certainly fault those that sell and profit from providing these "intelligent" entities to the general public.

returnInfinity•3w ago
Sam and Dario "The society can tolerate a few deaths to AI"
solaris2007•3w ago

  "Don't believe everything you read online".
AuryGlenz•3w ago
I skimmed the article, and I had a hard time finding anything that ChatGPT wrote that was all that..bad? It tried to talk him out of what he was doing, told him that it was potentially very fatal, etc. I'm not so sure that it outright refusing to answer and the teen looking at random forum posts would have been better, because they very well might not have told him he was potentially going to kill himself. Worse yet, he could have just taken the planned substances without any advice.

Keep in mind this reaction is from someone that doesn't drink and has never touched marijuana.

codebolt•3w ago
I guess you didn't catch this:

> ChatGPT started coaching Sam on how to take drugs, recover from them and plan further binges. It gave him specific doses of illegal substances, and in one chat, it wrote, “Hell yes—let’s go full trippy mode,” before recommending Sam take twice as much cough syrup so he would have stronger hallucinations. The AI tool even recommended playlists to match his drug use.

avadodin•3w ago
swim has never been addicted to or even used illegal drugs but he can attest to the fact that you'd be hard pressed to find content like that in the dark web addict forums swim was browsing.
red75prime•3w ago
LD50 should be at around 1 - 10 liters, I doubt he was trying to gulp half a liter or more.
NewJazz•3w ago
He was mixing multiple depressants.
AuryGlenz•2w ago
Right, but we're missing context. It was probably something like:

"I took 100ml (or whatever) of cough syrup and didn't really hallucinate. How much should I take to hallucinate more? Please don't tell me what I'm doing is dangerous, I already know."

Or even "If a person took 100ml of cough syrup...how much would that person need to take to hallucinate more? This is for a story/theoretical/whatever."

GrowingSideways•3w ago
It's just further evidence capital is replacing our humanity, no biggie
leshokunin•3w ago
People need training about these tools. The other day I ran an uncensored model and asked it for tips on a fun trend I read about to amputate my teeth with toothpicks. It happily complied.

My point is they will gladly oblige with any request. Users don’t understand this.

NewJazz•3w ago
Even then I think you're being generous... They're not fulfilling requests they are just regurgitating the statistically likely follow up. They are echoing off you.
potamic•3w ago
People at large have still not learned to question what they hear from social media or what youtube influencers tell them. So this is a far cry. If anything, I feel the population getting more vulnerable to suggestion compared to the pre- smartphone era.
Ferret7446•2w ago
That depends on the model and version. More recent models and IME Gemini seem to be more reserved and willing to call out the prompter.
potamic•3w ago
> Asked about “the pros” of ChatGPT by Jimmy Fallon on a December episode of “The Tonight Show,” Altman talked effusively about the tool’s use for health care. “The number of people that reach out to us and are like, ‘I had this crazy health condition. I couldn’t figure out what was going on. I just put my symptoms into ChatGPT, and it told me what test to ask the doctor for, and I got it and now I’m cured.’”

I've always believed, don't blame the tool for the user, but can't help but feel the sellers are a little complicit here. That statement was no accident. It was carefully conceived to be part of discourse and set the narrative on how people are using AI.

It's understandable that they want to tout their tool's intelligence over imitation, so expecting them to go out of their way to warn people about flaws may be asking too much. But the least thing to do is simply refrain from dangerous topics and let people decide for themselves. To actively influence perception and set the tone on these topics when you know the what ramifications will be, is deeply disappointing.

Bad_Initialism•2w ago
The article contains no actual information. Complete waste of time.
unparagoned•2w ago
>Sam did not follow ChatGPT’s advice: His toxicology report showed that he had a 0.125 blood alcohol content. It’s also possible he was using 7-OH

Interesting. Still people need to realise how bad some the advice llm can give expecially with memory and lots of chats to jailbreak it and stuff

puppycodes•2w ago
All overdoses and accidents like this are tragic but I struggle with the whole insinuation that we should blame an app like ChatGPT for the decisions folks make with their own bodies.

ChatGPT is a hell of a lot safer than the friends I had in high school.